Forum Discussion

Kris__109062's avatar
Kris__109062
Icon for Nimbostratus rankNimbostratus
Sep 25, 2012

My RAM Cache Irule

Hi,

 

Given the following iRule and HTTP profile, with RAM Cache enabled and applied to a VS, why would I still be seeing a lot of occurrences of clients GETting the URI from the backend servers even when they meet all the caching criteria?

 

I can see the cache is getting hits with "bigpipe profile http http_cache ramcache entry all show"

 

The RAM Cache HTTP profile has no pinned/included/excluded URIs - it is simply enabled with default values; however, my iRule below makes sure only the URIs I want are cached.

 

profile http http_cache {
   defaults from http
   ramcache enable
   ramcache size 100mb
   ramcache max entries 10000
   ramcache max age 600
   ramcache min object size 600
   ramcache max object size 50000
   ramcache ignore client cache control all
   ramcache aging rate 1
   ramcache insert age header disable
   ramcache uri exclude none
   ramcache uri include none
   ramcache uri pinned none
}

 

class cache_uris {
   "/path/to/some/uri?category=blah"
}

 

when RULE_INIT {
   set ::objectsize 500
}

when HTTP_REQUEST {
   set seenuri 0
   if { [matchclass [HTTP::uri] equals $::cache_uris] } {
      log local0. "[HTTP::uri] matched caching criteria. Set seenuri to true for the HTTP_RESPONSE event to cache."
      set seenuri 1
   }
}

when HTTP_RESPONSE {
   # By default, cache nothing.
   CACHE::disable
   if { $seenuri equals 1 } {
      set var_content_encoding [string tolower [HTTP::header "Content-Encoding"]]
      set var_content_length [HTTP::header "Content-Length"]
      # Cache the URI only if it's gzipped and exceeds 500 bytes.
      if { $var_content_encoding contains "gzip" && $var_content_length > $::objectsize } {
         CACHE::enable
      }
   }
}

 

 

 

 

 

 

7 Replies

  • To me the algorithms for deciding whether a URL is cacheable or not are, what do you call it, black magic? Or voodoo?

     

     

    How do you determine that an object should have been cached?

     

     

    Try this maybe: http://www.ircache.net/cgi-bin/cacheability.py

     

     

    My conclusion, when I got intrigued by this, was: I hope LTM knows what it's doing... and worry only when it caches something it turns out should not be cached, like user settings or content for the wrong user. Which is probably the reason why caches are very careful and choose to err on the safe side.
  • It might well be that a query string in the URI qualifies it as uncachable: http://dcommon.bu.edu/xmlui/bitstream/handle/2144/1812/2000-019-web-cachability.pdf?sequence=1

     

     

    but I could be wrong.
  • It's definitely cached, query string and all, and receiving hits from clients (i.e. the F5 is serving them from the cache... but not always):

     

     

    Host: blah.xxxxxxxxxxxx.com URI: /path/to/some/uri?category=blah
    | 1956 hits Size: 1011 Rank: 1 Source: 0/0 Owner: 0/0
    | Received: 2012-09-25 07:58:24 Last sent: 2012-09-25 08:00:48
    | Expires: 2012-09-25 08:08:24 Vary: none Vary count: 1
    | Vary user agent: none Vary encoding: none
  • You could try capturing the full headers of an object you think ought to have been cached and was not, and then ask: why was this one not cached?

     

     

    Two objects could have the same URI, but be served with different headers which could cause a proxy cache to make a different decision.

     

     

    If you are sure it ought to be cached, then you should open a case with Support and request a bug report be opened.
  • Thanks, I may do that. The headers are all uniform because they come from a mobile app that we developed, hence we know everything that hits this URI is identical.
  • Hi VirginBlue Kris,

     

     

    You should try using CACHE::uri, or more specifically CACHE::uri [HTTP::path]. This should cache and return the base path (minus the [HTTP::query]), thereby removing the uniqueness.
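    As a minimal sketch (assuming, as above, that the response does not actually vary by query string, so serving one cached object for all query variants is acceptable):

    ```tcl
    when HTTP_REQUEST {
       # Key the cache entry on the path alone, so that
       # /path/to/some/uri?category=blah and ?category=foo
       # share a single cached object instead of each query
       # string creating a unique cache key.
       CACHE::uri [HTTP::path]
    }
    ```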

     

     

    See the CACHE::uri documentation for more information, but this quote from the page may explain some of what you are seeing:

     

     

     

    Cached content by default is stored with a unique key referring to both the URI of the resource to be cached and the User-Agent for which it was formatted. If multiple variations of the same content must be cached under specific conditions (different client), you can use this command to create a unique key, thus creating cached content specific to that condition. This can be used to prevent one user or group's cached data from being served to different users/groups.

     

     

     

    Hope this helps.
  • Good idea..

     

     

    Also... I found out why some hits didn't hit the cache: we found some requests sent an Authorization header even though the URI doesn't require auth. I just strip that header now with "HTTP::header remove Authorization".
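    For anyone following along, the fix above would look something like this (a sketch only; you would want to restrict it to URIs you know don't need authentication, since stripping Authorization from requests that do need it would break them):

    ```tcl
    when HTTP_REQUEST {
       # Requests carrying an Authorization header are treated as
       # authenticated and bypass RAM Cache. If the URI doesn't
       # actually require auth, strip the header so the response
       # can be served from cache.
       if { [matchclass [HTTP::uri] equals $::cache_uris] } {
          HTTP::header remove Authorization
       }
    }
    ```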