Nginx 99: Cannot assign requested address to upstream

Author: gryzli

If you are using Nginx as a reverse or caching proxy and you are pushing a decent amount of traffic through it, sooner or later you will run into issues with the TCP connections between Nginx and your backend. 

You will start getting error messages looking like this: 

[crit] 2323#0: *535353 connect() to <backend_ip:port> failed (99: Cannot assign requested address) while connecting to upstream

The Problem 

When you use Nginx to proxy to a backend, each proxied request opens an additional TCP session to the backend.

In TCP/IP each connection is uniquely identified by the following: 

src_ip:src_port –> dst_ip:dst_port

So every additional TCP session you open to the backend needs a unique source port. 

The number of dynamic source ports you get per IP is defined by ip_local_port_range, which you can check by issuing: 

cat /proc/sys/net/ipv4/ip_local_port_range

and usually is: 32768 – 60999

So basically you are limited to fewer than 30,000 TCP connections to a single backend address and port. 

Now add the fact that each connection stays in the TCP TIME_WAIT state for at least 60 seconds, and you will soon realize that exhausting your dynamic source ports is pretty easy under moderately high traffic. 
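To put numbers on this, divide the ephemeral port count by the 60-second TIME_WAIT penalty to get a rough ceiling on sustained new connections per second. A quick sketch (the /proc path is Linux-specific; the fallback values are the usual defaults):

```shell
# Rough ceiling on sustained new connections/sec from one source IP:
# available ephemeral ports divided by the 60 s TIME_WAIT penalty.
range_file=/proc/sys/net/ipv4/ip_local_port_range   # Linux-specific path
if [ -r "$range_file" ]; then
    read low high < "$range_file"
else
    low=32768; high=60999   # the usual defaults
fi
ports=$((high - low + 1))
echo "ephemeral ports: $ports, sustainable rate: ~$((ports / 60)) conn/sec"
```

With the default range this works out to roughly 470 new connections per second to a single backend ip:port.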

This can lead not only to Nginx-related problems, but can also affect other applications that try to create TCP sessions and obtain dynamic ports on the same IP. 

You can check your current connections by issuing: 



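For example, you can count sockets stuck in TIME_WAIT with iproute2's `ss` (assuming it is installed):

```shell
# Count sockets currently in TIME_WAIT (tail skips the ss header line)
tw=$(ss -tan state time-wait 2>/dev/null | tail -n +2 | wc -l)
echo "TIME_WAIT connections: $tw"
```

On older systems without `ss`, `netstat -an | grep -c TIME_WAIT` gives a similar count.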
Over time I have found multiple solutions to this problem, and I'm going to go through all of them. 


Solution 1: Enabling KeepAlive between Nginx and your Backend 

The idea of KeepAlive is to reuse already-opened connections. For this to work, you need to configure Nginx to support KeepAlive (the harder part) and also enable KeepAlive in your backend server (whatever it is). 

1) Enabling KeepAlive inside Nginx 

You need the following settings in Nginx in order to activate KeepAlive:

Add the following to the location {} block that contains your proxy_pass directive: 

proxy_http_version 1.1;
proxy_set_header Connection "";

Define a KeepAlive-enabled upstream in your http {} config: 

If your backend listens on localhost port 80, you could define an upstream named localhost_80 for it. 
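A minimal sketch, assuming the backend listens on 127.0.0.1:80 and using the upstream name localhost_80 that the proxy_pass step below refers to:

```nginx
upstream localhost_80 {
    server 127.0.0.1:80;
    # Idle keepalive connections each worker process keeps open to the backend
    keepalive 32;
}
```

The keepalive directive is what actually enables connection caching; without it, the proxy_http_version and Connection header settings alone have no effect.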
Modify your proxy_pass to use the upstream definition instead of a direct address.

If your proxy_pass currently points directly at the backend and looks similar to this: 

proxy_pass http://<backend_ip>:<port>;

It should now be changed to look like: 

proxy_pass http://localhost_80;


2) Enabling KeepAlive in your backend

Finally, you should enable KeepAlive in your backend.

If you are using the Apache web server as a backend, you could add the following to your httpd.conf:


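A typical snippet uses Apache's stock KeepAlive directives (the values shown are Apache's common defaults, not tuned recommendations):

```apache
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
```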
By doing all this, the number of open connections between Nginx and the upstream should drop significantly.



Solution 2: Setting tcp_tw_reuse to 1

If for some reason you don't want to (or can't) use KeepAlive between Nginx and the upstream/backend, you could try the tcp_tw_reuse kernel setting.

At least for me, this option worked perfectly and solved the connection problem when KeepAlive was disabled.

You can turn this option on as follows:

Edit /etc/sysctl.conf and add:

net.ipv4.tcp_tw_reuse = 1
then issue:

sysctl -p
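To verify the setting took effect, you can read it back (Linux-specific path; note that tcp_tw_reuse only affects outgoing connections, which is exactly the Nginx-to-backend direction):

```shell
# 0 = off, 1 = on (newer kernels also accept 2 = loopback-only)
val=$(cat /proc/sys/net/ipv4/tcp_tw_reuse 2>/dev/null || echo "unavailable")
echo "tcp_tw_reuse = $val"
```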
Solution 3: Using multiple backend ip addresses

If Solutions 1 and 2 don't work for you because you have really extreme traffic volumes, then you should think about adding additional backend IP addresses.

The concept is pretty simple and straightforward; what you need to do is the following: 


1) Make your backend listen on multiple IPs

If your Nginx and backend are running on the same machine, this is pretty easy: you can either use the public IP plus localhost (127.0.0.1), or add any additional private IP addresses you like and use them.

So, for example, if you decide to use several IP addresses, you should configure your backend to listen on all of them.
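For illustration, assume the hypothetical addresses 127.0.0.1, 10.0.0.1 and 10.0.0.2. With Apache as the backend, listening on all of them could look like this in httpd.conf:

```apache
Listen 127.0.0.1:80
Listen 10.0.0.1:80
Listen 10.0.0.2:80
```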


2) Next, configure your Nginx upstream to load-balance across them

After your backend is configured, update the upstream {} definition in Nginx so that it uses all of the configured IP addresses.
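A minimal sketch using the same three hypothetical addresses (the upstream name backend_pool is illustrative):

```nginx
upstream backend_pool {
    # Requests are distributed round-robin (the default) across all servers
    server 127.0.0.1:80;
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}
```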
With such an upstream definition, Nginx will load-balance requests to the backend equally, using the default round-robin mechanism.

If you use 3 different IPs, your dynamic source port range is effectively tripled.