The problem: cache a backend response only if it took more than 5 seconds to generate. If not, don't cache it!
A good challenge from @florentsolt. He solves the problem with a Node.js reverse proxy behind nginx, but I prefer a pure nginx implementation ;)
I needed this kind of configuration as a temporary workaround for a proprietary backend which responds slowly to certain requests, and quickly to others.
So instead of letting each client wait 10 seconds or more, it's better to serve a cached response to the next clients for 60 seconds, even if it's not the most up-to-date data.
Finding no information on the subject, here's my solution (maybe not the best one, but it worked for my needs). If you have a better solution (even with another piece of software), feel free to add a comment :).
I use the “map” directive to set a variable and pass it via the X-Accel-Expires header. From the nginx documentation: “Parameters of caching can also be set directly in the response header. This has higher priority than setting of caching time using the directive. The “X-Accel-Expires” header field sets caching time of a response in seconds.”
I use two vhosts for this trick, because we need to measure the response time before sending the response to the client.
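Here is a sketch of what such a two-vhost setup could look like. This is my reconstruction of the idea, not the exact configuration from the article: the backend address (127.0.0.1:9000), the internal port (8081), the cache zone name (slowcache) and the cache path are all assumptions, and the X-Cache-Status header is an optional extra for debugging.

```nginx
# In the http {} context.

# Map the upstream response time (a string like "0.421" or "11.834")
# to a cache TTL: 60 seconds if the backend took 3 seconds or more,
# 0 (no caching) otherwise.
map $upstream_response_time $cache_ttl {
    default                    0;
    "~^([3-9]|[1-9][0-9]+)\."  60;
}

proxy_cache_path /var/cache/nginx keys_zone=slowcache:10m;

# Vhost A (internal): talks to the real backend and computes the TTL.
# $upstream_response_time is known once the backend has answered, so the
# mapped $cache_ttl is correct when the header is emitted.
server {
    listen 127.0.0.1:8081;

    location / {
        proxy_pass http://127.0.0.1:9000;      # the slow proprietary backend (assumed address)
        add_header X-Accel-Expires $cache_ttl; # 60 if slow, 0 if fast
    }
}

# Vhost B (public, port 8080): proxies to vhost A and caches according to
# the X-Accel-Expires header it receives (nginx honors it by default).
server {
    listen 8080;
    server_name example.com;

    location / {
        proxy_cache     slowcache;
        proxy_cache_key $scheme$host$request_uri;
        proxy_pass      http://127.0.0.1:8081;
        add_header X-Cache-Status $upstream_cache_status; # optional: expose HIT/MISS
    }
}
```

The split matters because X-Accel-Expires is only honored when it arrives from an upstream: vhost A plays the role of the "upstream" that decides the TTL, and vhost B is the caching layer that obeys it.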
A request to http://example.com:8080 will now force nginx to cache the response if it takes more than 3 seconds, via X-Accel-Expires: 60.
If the request takes less than 3 seconds, X-Accel-Expires: 0 is sent instead, which completely disables caching.
We can see it with curl:
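Something like this, using curl's `-w` timing output (the command is mine; the 11.8 s figure comes from the article, the rest of the output is illustrative):

```
$ curl -so /dev/null -w "time: %{time_total}s\n" http://example.com:8080/
time: 11.800s
```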
It took 11.8 sec! On the next call:
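The same request again, now answered from the cache almost instantly (the exact timing below is illustrative, not from the article):

```
$ curl -so /dev/null -w "time: %{time_total}s\n" http://example.com:8080/
time: 0.002s
```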
We hit the cache \o/