How To Optimize Nginx For Maximum Performance

by lifeLinux on August 25, 2011

Nginx is an open-source web server. It is a high-performance HTTP server that uses very few server resources, is reliable, and integrates beautifully with Linux. In this article, I’ll talk about optimizing your Nginx server for maximum performance.

Install Nginx with a minimal number of modules

Run Nginx with only the required modules. This reduces the memory footprint and therefore improves server performance. Example configuration:

./configure --prefix=/webserver/nginx --without-mail_pop3_module --without-mail_imap_module  --without-mail_smtp_module --with-http_ssl_module  --with-http_stub_status_module  --with-http_gzip_static_module
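
After the build, you can confirm which modules were (and were not) compiled in by asking the binary for its configure arguments; the path below assumes the default sbin location under the --prefix used in the example above:

/webserver/nginx/sbin/nginx -V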

worker_processes

A worker process is a single-threaded process. If Nginx is doing CPU-intensive work such as SSL termination or gzipping and you have 2 or more CPUs/cores, set worker_processes equal to the number of CPUs or cores. For example, I run Nginx on a server with an Intel Xeon X3340 (4 cores), so I set worker_processes to 4. If you are serving a lot of static files and their total size is bigger than the available memory, you may increase worker_processes further to fully utilize disk bandwidth.
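
A minimal sketch of the corresponding nginx.conf setting, assuming the 4-core X3340 mentioned above (on Linux, grep -c ^processor /proc/cpuinfo reports the core count):

# one worker process per CPU core on a 4-core machine
worker_processes  4;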

worker_connections

This sets the number of connections each worker process can handle. You can determine a sensible value with the ulimit -n command; if its output is, say, 1024, then worker_connections should be set to 1024 or less (1024 is a good default).
You can work out the maximum number of clients by multiplying this value by the worker_processes setting:

max_clients = worker_processes * worker_connections
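
Putting worker_processes and worker_connections together, a minimal sketch of the relevant part of nginx.conf using the values from above:

worker_processes  4;

events {
    # per-worker limit; keep this at or below the `ulimit -n` value
    worker_connections  1024;
}

# max_clients = 4 * 1024 = 4096 concurrent connections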

Buffers

One of the most important things you need to tweak is the buffer sizes you allow Nginx to use. If the buffer sizes are set too low, Nginx has to spill the data to temporary files, which causes both write and read I/O; the more traffic you get, the more of a problem this becomes. Edit the buffer size limits for all clients as follows:

client_body_buffer_size 8k;
client_header_buffer_size 1k;
client_max_body_size 2m;
large_client_header_buffers 2 1k;

Where,

1. client_body_buffer_size: Sets the buffer size for the client request body. If the request body is larger than the buffer, the whole body or part of it is written to a temporary file.
2. client_header_buffer_size: Sets the buffer size for the client request header. For the overwhelming majority of requests, a buffer of 1k is sufficient.
3. client_max_body_size: Sets the maximum accepted size of a client request body, as indicated by the Content-Length request header. If the size exceeds this limit, the client gets the error “Request Entity Too Large” (413). A per-location override is sketched after this list.
4. large_client_header_buffers: Sets the maximum number and size of buffers used to read large client request headers. The request line cannot be bigger than one buffer, or the client gets the error “Request-URI Too Large” (414). A single header field also cannot be bigger than one buffer, or the client gets the error “Bad Request” (400).
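
If one part of your site legitimately needs larger request bodies (file uploads, for example), you can override client_max_body_size for just that location instead of raising it globally. A minimal sketch, where the /uploads location is only a hypothetical example:

# inside a server { } block
location /uploads {
    # hypothetical upload endpoint: allow request bodies up to 20m here only
    client_max_body_size    20m;
    client_body_buffer_size 128k;
}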

You also need to control timeouts to improve server performance and cut off idle clients. Edit them as follows:

client_body_timeout   10;
client_header_timeout 10;
keepalive_timeout     15;
send_timeout          10;

Where,

1. client_body_timeout: Sets the read timeout for the client request body. The timeout applies only if the body is not received in one read step. If the client sends nothing within this time, Nginx returns the error “Request Time-out” (408).
2. client_header_timeout: Sets the timeout for reading the client request header. The timeout applies only if the header is not received in one read step. If the client sends nothing within this time, Nginx returns the error “Request Time-out” (408).
3. keepalive_timeout: The first parameter sets the timeout for keep-alive connections with the client; the server closes the connection after this time. The optional second parameter sets the time value in the Keep-Alive: timeout=time response header, which can convince some browsers to close the connection themselves so the server does not have to. Without this parameter, Nginx does not send a Keep-Alive header (though this header is not what makes a connection keep-alive). See the example after this list.
4. send_timeout: Sets the timeout for transmitting a response to the client. The timeout applies not to the entire transfer but only between two successive write operations; if the client accepts nothing within this time, Nginx shuts down the connection.
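
As noted for keepalive_timeout above, the optional second parameter advertises the limit to the client via a Keep-Alive: timeout=... response header. A sketch using the 15-second value from the block above:

# keep idle connections open for 15s and tell the client about that limit
keepalive_timeout  15 15;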


Comments

Web Hosting January 18, 2012 at 4:29 pm

Also, do not forget to fine-tune the tcp_* parameters of Nginx.


Mike August 9, 2012 at 5:03 am

Hey, great article. I enabled gzip compression on mine and it made a huge difference.

Question though: my server uses Nginx as a reverse proxy; it talks to Apache. In this type of setup, do you know a way to cache static files, like in your configuration:
[code]
location ~* "\.(js|ico|gif|jpg|png|css|html|htm|swf|htc|xml|bmp|cur)$" {
    root /home/site/public_html;
    add_header Pragma "public";
    add_header Cache-Control "public";
    expires 3M;
    access_log off;
    log_not_found off;
}
[/code]

Once again, great article, thanks!


meal February 21, 2013 at 3:13 am

I do not even know how I ended up here, but I thought this post was good.
I don’t know who you are, but you’re certainly going to be a famous blogger if you are not already 😉 Cheers!


seem May 14, 2013 at 12:55 am

I agree with all the concepts you have offered in your post.
They’re really convincing and will definitely work. Still, the posts are very short for novices. Could you please extend them a bit next time? Thanks for the post.


Brook July 22, 2013 at 1:04 am

Pretty! This was an incredibly wonderful post.
Thanks for providing these details.

