Issue 369: Document a mechanism to use nginx in front of Apache/mod_pagespeed
5 people starred this issue and may be notified of changes.
Status:  RequestClarification
Owner:  igrigo...@google.com


Reported by dasource...@gmail.com, Jan 11, 2012
It would be great if mod_pagespeed could write out the processed (i.e. combined, minified and compressed) files without any metadata into the cache. The idea is to fetch them from there and serve them directly via a minimal webserver such as nginx, which would allow more efficient setups like an nginx frontend with an Apache+mod_pagespeed backend.
Jan 11, 2012
Project Member #1 sligocki@google.com
You are saying that you'd like to be able to run nginx directly off of the cached files stored on the file system?

What would it do if the file was not in cache? Would you just serve everything with some default headers?
Jan 11, 2012
#2 dasource...@gmail.com
Well, if nginx could not find it, I'd pass the entire request to the Apache webserver.

My usual setup goes like this:
 - Can nginx access a static file? If yes: Serve it. If not:
 - Pass the request to Apache.

That's a pretty common nginx+Apache setup. I'd like to add another step:
 - Can nginx access the requested file? If yes, serve it. If not:
 - Can nginx find this file in mod_pagespeed's cache? If yes, serve it. If not:
 - Pass the entire request to Apache
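A rough sketch of that cascade in nginx configuration terms, assuming Apache listens on 127.0.0.1:8080 and assuming the hypothetical metadata-free cache files were reachable under the docroot (the mps_cache name and all paths are placeholders, not an existing mod_pagespeed feature):

    # Hypothetical layout: static files under /var/www/html, plus a directory
    # of plain, metadata-free rewritten files that mod_pagespeed would write
    # out, reachable as /var/www/html/mps_cache (e.g. via a symlink).
    server {
        listen 80;
        root /var/www/html;

        location / {
            # 1) serve the static file if it exists,
            # 2) serve a plain copy from the hypothetical cache dump,
            # 3) otherwise hand the request to Apache.
            try_files $uri /mps_cache$uri @apache;
        }

        location @apache {
            proxy_pass http://127.0.0.1:8080;   # Apache + mod_pagespeed backend
            proxy_set_header Host $host;
        }
    }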
Jan 11, 2012
Project Member #3 jmara...@google.com
Can nginx be configured with its own cache module, or use squid or varnish?  I think that would work very effectively, since mod_pagespeed's rewritten resources are cacheable for a year.

The drawback relative to your suggestion is that it would duplicate the resources in mod_pagespeed's cache format and the other cache.  The benefit is that the two systems would work well together, each doing what they were designed for.

On another note, it's been requested numerous times that mod_pagespeed be ported to run natively in nginx.  This is a great idea and we'd love to see external contribution toward this goal.

Jan 12, 2012
#4 dasource...@gmail.com
>Can nginx be configured with its own cache module, or use squid or varnish?
Apart from nginx's gzip_static module and the built-in support for memcached, there are no real caching facilities for nginx. Well, except for the FastCGI cache, but I assume you don't mean that :) squid or varnish could be integrated into the request chain, though.

>The drawback relative to your suggestion is that it would duplicate the resources in mod_pagespeed's cache format and the other cache.
Yes, I am fully aware of that. You'll have to store your metadata separately. However, I am convinced that the results would be well worth it: nginx could push the processed files out to the 'net, taking the heat off Apache, which would be left with only dynamic content and mod_pagespeed processing.

>On another note, it's been requested numerous times that mod_pagespeed be ported to run natively in nginx.
I think there's a very early port: <http://forum.nginx.org/read.php?29,204402>. I haven't checked on that yet. However, I think the numerous nginx+Apache setups would benefit more from the feature proposed in this ticket.

Some other ideas I had while looking through this ticket:
 - ngx_gzip_static writes out its gzipped content to disk, which eliminates the need to re-compress static content on every request. Would this be an idea for mod_pagespeed as well? It would spare the extra call to mod_deflate. Besides, I've heard of some Apache setups where the admin created e.g. a styles.css.gz which is served whenever the client supports compression and styles.css is requested. This would also mean it could finally make sense to compress static content more aggressively (i.e. gzip -9). (See the sketch after this list.)
 - nginx can serve content from memcached. Maybe it would be an idea to write a memcached cache backend? This would also limit disk I/O. From what I see, the cache can be filled quite rapidly, so having non-persistent cache storage shouldn't be much of a problem.
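For reference, the gzip_static behaviour described in the first idea looks roughly like this in nginx (paths and file types are only examples, and the module must be compiled in via --with-http_gzip_static_module):

    http {
        gzip on;                # on-the-fly compression as a fallback
        gzip_comp_level 6;

        server {
            listen 80;
            root /var/www/html;

            location ~* \.(css|js)$ {
                # If e.g. styles.css.gz exists next to styles.css and the
                # client sends Accept-Encoding: gzip, the .gz file is served
                # as-is, so the pre-compressed copy can be made with gzip -9.
                gzip_static on;
            }
        }
    }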
May 24, 2012
Project Member #5 jmara...@google.com
Summary was: Write out plain cache files

Note: another option is to use Varnish, which works pretty well with mod_pagespeed for this purpose.

Summary: Document a mechanism to use nginx in front of Apache/mod_pagespeed
Status: Accepted
Owner: igrigo...@google.com
May 24, 2012
Project Member #6 igrigo...@google.com
This setup feels a bit backwards, and I say that as a big nginx fan. Yes, nginx is likely a bit faster at serving static content, but truth be told, Apache with the "EnableSendfile" directive on will perform very well too. See the documentation here: http://httpd.apache.org/docs/2.2/mod/core.html#enablesendfile

If you're after a high-performance cache then, as jmarantz@ pointed out, adding Varnish or Squid in front will give you much better performance and a clean integration based on HTTP headers, instead of having to rely on mirroring cache directories and similar complexity. The problem with the shared-cache-directory setup is that it won't scale beyond a single box. Relying on HTTP headers allows you to have an independent cache tier that's completely decoupled from a single HTTP box or Apache process.
May 25, 2012
Project Member #7 igrigo...@google.com
On further investigation, nginx can already be configured to act as a proxy cache. Here are a number of good resources on the topic:

- http://wiki.nginx.org/HttpProxyModule
- http://serverfault.com/questions/30705/how-to-set-up-nginx-as-a-caching-reverse-proxy
- http://www.rfxn.com/nginx-caching-proxy/

With the above in place, everything should work as expected without any additional modifications or file sharing between nginx and Apache.
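A minimal caching reverse-proxy sketch along those lines, assuming Apache + mod_pagespeed listens on 127.0.0.1:8080 (cache path, zone name and sizes are placeholders):

    http {
        # Cache storage; expiry follows the Cache-Control/Expires headers
        # that mod_pagespeed sets on its rewritten resources.
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mps:64m
                         max_size=1g inactive=7d;

        server {
            listen 80;

            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_cache mps;
                # Used only for responses without explicit freshness headers.
                proxy_cache_valid 200 1m;
                add_header X-Cache-Status $upstream_cache_status;
            }
        }
    }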

As such, my suggestion for this thread is: wontfix, already works. :)
Jan 31, 2014
Project Member #11 jmara...@google.com
Checking with original reporter to see if all issues are addressed in the comments at this point.

Status: RequestClarification