This repository has been archived by the owner on Apr 21, 2023. It is now read-only.

Document a mechanism to use nginx in front of Apache/mod_pagespeed #369

Open
GoogleCodeExporter opened this issue Apr 6, 2015 · 11 comments


@GoogleCodeExporter

It would be great if mod_pagespeed could write the processed (i.e. combined, minified, and compressed) files into its cache as plain files, without any metadata. The idea is to fetch them from there and serve them directly via a minimal webserver such as nginx, which would allow more efficient setups like an nginx frontend with an Apache+mod_pagespeed backend.

Original issue reported on code.google.com by dasource...@gmail.com on 11 Jan 2012 at 11:50

@GoogleCodeExporter

You are saying that you'd like to be able to run nginx directly off of the 
cached files stored on the file system?

What would it do if the file was not in cache? Would you just serve everything 
with some default headers?

Original comment by sligocki@google.com on 11 Jan 2012 at 11:58

@GoogleCodeExporter

Well, if nginx could not find it, I'd pass the entire request to the Apache webserver.

My usual setup goes like this:
 - Can nginx access a static file? If yes, serve it. If not:
 - Pass the request to Apache.

That's a pretty common nginx+Apache setup. I'd like to add another step (see the sketch below):
 - Can nginx access the requested file? If yes, serve it. If not:
 - Can nginx find this file in mod_pagespeed's cache? If yes, serve it. If not:
 - Pass the entire request to Apache.
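
A minimal nginx sketch of that three-step flow, assuming mod_pagespeed could mirror its rewritten output as plain files under /var/www/html/pagespeed_cache (which is essentially what this issue asks for) and that Apache+mod_pagespeed listens on port 8080; the paths and port are illustrative assumptions:

```nginx
server {
    listen 80;
    root /var/www/html;

    location / {
        # 1. plain static file?  2. mirrored pagespeed output?  3. fall back to Apache.
        try_files $uri /pagespeed_cache$uri @apache;
    }

    location @apache {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```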

Original comment by dasource...@gmail.com on 12 Jan 2012 at 12:11

@GoogleCodeExporter

Can nginx be configured with its own cache module, or use squid or varnish?  I 
think that would work very effectively, since mod_pagespeed's rewritten 
resources are cacheable for a year.

The drawback relative to your suggestion is that the resources would be duplicated between mod_pagespeed's cache and the other cache. The benefit is that the two systems would work well together, each doing what it was designed for.

On another note, it's been requested numerous times that mod_pagespeed be 
ported to run natively in nginx.  This is a great idea and we'd love to see 
external contribution toward this goal.

Original comment by jmara...@google.com on 12 Jan 2012 at 3:10

@GoogleCodeExporter

>Can nginx be configured with its own cache module, or use squid or varnish?
Apart from nginx's gzip_static module and the built-in support for memcached, there are no real caching facilities in nginx. Well, except for the FastCGI cache, but I assume you don't mean that :) Squid or Varnish could be integrated into the request chain, though.

>The drawback relative to your suggestion is that the resources would be duplicated between mod_pagespeed's cache and the other cache.
Yes, I am fully aware of that. You'll have to store your metadata separately. However, I am convinced that the results would be well worth it: nginx could push the processed files out to the net, taking the load off Apache, which would be left with only dynamic content and mod_pagespeed processing.

>On another note, it's been requested numerous times that mod_pagespeed be 
ported to run natively in nginx.
I think there's a very early port: <http://forum.nginx.org/read.php?29,204402>
I haven't checked on that yet. However, I think the numerous nginx+Apache setups would benefit more from the feature proposed in this ticket.

Some other ideas I had while looking through this ticket:
 - ngx_gzip_static serves pre-compressed gzip'ed content from disk, which eliminates the need to re-compress static content on every request. Would this be an idea for mod_pagespeed as well? It would spare the extra call to mod_deflate. Besides, I've heard of some Apache setups where the admin creates e.g. a styles.css.gz that is served whenever styles.css is requested and the client supports compression. This would also mean it could finally make sense to compress static content more aggressively (e.g. gzip -9).
 - nginx can serve content from memcached. Maybe it would be an idea to write a memcached cache backend? This would also limit disk I/O. From what I see, the cache can be filled quite rapidly, so having non-persistent cache storage shouldn't be much of a problem. (Both ideas are sketched below.)
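
Rough nginx sketches of those two ideas; the asset locations, the memcached address, and the Apache backend port are illustrative assumptions, not anything mod_pagespeed provides today:

```nginx
server {
    listen 80;
    root /var/www/html;

    # Idea 1: serve pre-compressed files (e.g. created with gzip -9) when the
    # client accepts gzip; nginx looks for "$uri.gz" before serving the plain file.
    location ~* \.(css|js)$ {
        gzip_static on;
        expires 1y;
    }

    # Idea 2: try memcached first, fall back to Apache+mod_pagespeed on a miss.
    location /rewritten/ {
        set $memcached_key $uri;
        memcached_pass 127.0.0.1:11211;
        default_type application/octet-stream;  # memcached stores no headers
        error_page 404 502 = @apache;
    }

    location @apache {
        proxy_pass http://127.0.0.1:8080;
    }
}
```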

Original comment by dasource...@gmail.com on 12 Jan 2012 at 12:52

@GoogleCodeExporter

Summary was: Write out plain cache files

Note: another option is to use Varnish, which works pretty well with 
mod_pagespeed for this purpose.

Original comment by jmara...@google.com on 24 May 2012 at 7:48

  • Changed title: Document a mechanism to use nginx in front of Apache/mod_pagespeed
  • Changed state: Accepted

@GoogleCodeExporter

This setup feels a bit backwards... and I say that as a big nginx fan. Yes, nginx is likely a bit faster at serving static content, but truth be told, Apache with the sendfile directive enabled performs very well too. See the documentation here:
http://httpd.apache.org/docs/2.2/mod/core.html#enablesendfile

If you're after a high-performance cache then, as jmarantz@ pointed out, adding Varnish or Squid in front will give you much better performance and a clean integration based on HTTP headers, instead of having to rely on mirroring cache directories and similar complexity. The problem with the shared-cache-directory setup is that it won't scale beyond a single box. Relying on HTTP headers allows you to have an independent cache tier that's completely decoupled from a single HTTP box or Apache process.

Original comment by igrigo...@google.com on 25 May 2012 at 2:26

@GoogleCodeExporter

On further investigation, nginx can already be configured to act as a proxy 
cache. Here are a number of good resources on the topic:

- http://wiki.nginx.org/HttpProxyModule
- http://serverfault.com/questions/30705/how-to-set-up-nginx-as-a-caching-reverse-proxy
- http://www.rfxn.com/nginx-caching-proxy/

With the above in place, everything should work as expected without any additional modifications or file sharing between nginx and Apache.
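
To make that concrete, here is a minimal caching-reverse-proxy sketch along the lines of the resources above; the cache path, zone name, and the Apache backend address are assumptions:

```nginx
# Goes in the http{} block: a disk cache for proxied responses.
proxy_cache_path /var/cache/nginx/pagespeed levels=1:2 keys_zone=pagespeed:10m
                 max_size=1g inactive=30d;

server {
    listen 80;

    location / {
        proxy_cache pagespeed;
        # Fallback TTL for responses without cache headers; mod_pagespeed's
        # one-year Cache-Control on rewritten resources takes precedence.
        proxy_cache_valid 200 10m;
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```

With something like this, rewritten .pagespeed. resources are served from nginx's cache after the first request, while HTML and cache misses still go to Apache.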

As such, my suggestion for this thread is: wontfix, already works. :)

Original comment by igrigo...@google.com on 25 May 2012 at 5:21


@GoogleCodeExporter

https://github.com/pagespeed/ngx_pagespeed

Original comment by devzone...@gmail.com on 12 Oct 2012 at 3:51

@GoogleCodeExporter

Checking with original reporter to see if all issues are addressed in the 
comments at this point.

Original comment by jmara...@google.com on 31 Jan 2014 at 3:42

  • Changed state: RequestClarification
