Performance of datagroup vs. session subtable lookup
Environment: 10.2.2 HF1 on 6900s
I have a VS that gets about a billion and a half hits per year. This is a highly structured environment where every hosted application URL follows a strict set of guidelines. The first subfield of the URI dictates the pool selection for the request. Users may make heavy use of anywhere from one to N applications in this environment, bouncing between them at will.
So I have a datagroup that maps my list of available subfields to pools. The basic question: given my request volume, is it more efficient to query the datagroup on every request to make the pool selection, or to essentially cache the datagroup in a session subtable? I have to parse the URI on every request regardless; the question is only about the pool lookup that follows. If I don't find the subfield in the session table, I can go to the datagroup, get the pool, and store the mapping in the table; would that be more performant than querying the datagroup every single time? Consecutive requests will often carry the same subfield while a user works within a particular application, but nothing prevents users from hopping around among the 500+ applications in this environment.
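For concreteness, here is a sketch of the two approaches in one HTTP_REQUEST event. The datagroup name (`dg_uri_pools`), subtable name (`uri_pools`), and the one-hour cache timeout are all my own placeholders, not anything from the post:

```
when HTTP_REQUEST {
    # First URI subfield (e.g. /app1/... -> "app1"), lowercased for
    # consistent matching
    set sub [string tolower [getfield [HTTP::uri] "/" 2]]

    # Direct approach: hit the datagroup on every request
    # set pool_name [class match -value $sub equals dg_uri_pools]

    # Cached approach: check the session subtable first, fall back to
    # the datagroup on a miss and cache the mapping for an hour
    set pool_name [table lookup -subtable uri_pools $sub]
    if { $pool_name eq "" } {
        set pool_name [class match -value $sub equals dg_uri_pools]
        if { $pool_name ne "" } {
            table set -subtable uri_pools $sub $pool_name 3600
        }
    }

    if { $pool_name ne "" } { pool $pool_name }
}
```

Note the trade-off the question is really about: on a cache hit the subtable path does one `table lookup`, while on a miss it does a `table lookup` plus the `class match` plus a `table set`, so the answer likely hinges on the hit rate and on how the in-memory datagroup lookup compares to a session-table lookup at this scale.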
Been working with the timing command, but thought I would check here also :)
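For anyone following along, the timing facility mentioned above is enabled by prepending `timing on` to the iRule under test, for example (rule body is just a minimal illustration):

```
# Collect per-event CPU cycle statistics for this iRule
timing on

when HTTP_REQUEST {
    set sub [string tolower [getfield [HTTP::uri] "/" 2]]
}
```

On a 10.x box the accumulated min/avg/max cycle counts can then be read back per rule from the command line (e.g. via `bigpipe rule <rule_name> show all`, if memory serves), which is the obvious way to compare the two lookup strategies side by side.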
Thanks,
Jen