Forum Discussion

sriramgd_111845
Dec 10, 2008

inspect start of payload

We are planning to add an 'overload' iRule in production, to allow for a redirect in case our application server is overloaded.

We want users who are already logged in to continue with their sessions, and to redirect only users who log in after we turn on the redirect iRule.

There is no easy way to differentiate between a login packet and packets from an already-logged-in user without inspecting the first n (say 100) characters of every POST.

This rule will be turned on only during an emergency, and we are going to check CPU utilization in perf testing. The question is: if I do a

if { [HTTP::method] equals "POST" } {
    if { [HTTP::payload 100] starts_with "" } {
        ...

Is this okay? Is getting 100 characters of the payload and comparing against a string more efficient than getting, say, 200 characters? In other words, is inspecting the first few characters of the payload more efficient than inspecting the whole payload, or is the performance the same? And is there a more efficient way of doing this altogether?

7 Replies

  • sriramgd, no need for an iRule here. Why don't you just use a load balancing mode? It'll be more effective.

    Perhaps use Ratio or one of the dynamic load balancing methods to distribute traffic based on your server performance.
  • Mike_Lowell_108
    Historic F5 Account
    I agree with jquadri about the overall solution: if you can use Least Connections or Dynamic Ratio (maybe combined with persistence) to load balance new vs. existing users to the least loaded server, that'll be faster than inspecting requests.

     

     

    About your question:

     

If your app sets a cookie or a URI query parameter for users that are logged in, that would be better than reading POSTs. Or, if you don't mind making the differentiation more general and just checking whether they're already on the site (not necessarily "logged in"), you could enable cookie persistence and check for the BIG-IP cookie to determine whether they're new or not.
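
    A minimal sketch of that cookie-persistence check (the pool name app_pool, its default persistence cookie name BIGipServerapp_pool, and the busy-page URL are all assumptions for illustration):

    [code]

    when HTTP_REQUEST {
        # A cookie persistence profile inserts a cookie named BIGipServer<pool_name>
        # by default, so its presence implies a returning client
        if { [HTTP::cookie exists "BIGipServerapp_pool"] } {
            pool app_pool
        } else {
            # new visitor during the overload window: redirect away
            HTTP::redirect "http://www.example.com/busy.html"
        }
    }

    [/code]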

     

     

    There are some iRules on CodeShare that might give you some ideas on how to handle your situation. They don't have the exact same use-case, but some ideas might be reusable:

     

     

    http://devcentral.f5.com/wiki/default.aspx/iRules/HTTP_delay_and_validate_clients_using_javascript__cookies_when_CPU_is_overloaded.html

     

     

    Also check out this iRule:

     

    http://devcentral.f5.com/Default.aspx?tabid=109

     

     

    Regardless of how you determine users are new vs. existing, I suggest only doing that check once. Even if you read POST data, after you've determined they're already logged in I suggest setting a cookie so you don't have to inspect every POST from the same user.
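
    One way that inspect-once pattern might look (the marker string "login=", the cookie name seen_login, and the 100-byte collect size are all placeholders for illustration):

    [code]

    when HTTP_REQUEST {
        set mark_client 0
        # already marked: skip payload inspection entirely
        if { [HTTP::cookie exists "seen_login"] } { return }
        if { [HTTP::method] equals "POST" } {
            # buffer only the first 100 bytes of the body
            HTTP::collect 100
        }
    }
    when HTTP_REQUEST_DATA {
        # HTTP::payload is available once the collected data has arrived
        if { [HTTP::payload] starts_with "login=" } {
            set mark_client 1
        }
        HTTP::release
    }
    when HTTP_RESPONSE {
        if { $mark_client } {
            # mark the client so later POSTs from them are not inspected
            HTTP::cookie insert name "seen_login" value "1"
        }
    }

    [/code]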

     

     

To answer your question specifically: there are different tiers of 'cost' for inspecting more data. For example, inspecting 5 bytes is probably the same as inspecting 10, but inspecting 100 is probably more than 5; 200 may be the same as 100, but 500 may be more than 200. I'm not sure of the exact cost differences, because usually other factors dominate (you can measure the speed of your particular iRule using one of the links below), but I do know that in some cases there's no additional cost for inspecting a few more bytes, though in general it does cost more to inspect more data.

     

     

Once you have the data, how you do the inspecting is going to be an important factor in performance. If you can check something in the header (e.g. the URI or cookies), that's faster than inspecting payload data. If you have to inspect data, whether you use starts_with vs. contains vs. scan will matter. If you're using starts_with, there's probably no reason to check 100 bytes; just check enough bytes to be unique.

     

     

    Another idea would be to use caching to offload the server:

     

    http://devcentral.f5.com/Default.aspx?tabid=63&articleType=ArticleView&articleId=283

     

     

Maybe even use an iRule to force caching of more content that wouldn't normally be cacheable (or maybe all content except POSTs) during the overload case, so users at least get a response, even if it's a cached one.
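
    Assuming a RAM Cache profile is attached to the virtual server, that forced-caching idea might be sketched like this (the global ::overload_mode on/off flag is a hypothetical variable set elsewhere):

    [code]

    when HTTP_REQUEST {
        # during the overload window, serve everything except POSTs from RAM Cache
        if { $::overload_mode && !([HTTP::method] equals "POST") } {
            CACHE::enable
        }
    }

    [/code]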

     

     

You might have an iRule that checks whether the client is sending an If-Modified-Since request, and immediately respond with a 304 "Not Modified" when the server is overloaded, so that freshness checks are handled by BIG-IP (of course, you should only do this if the content really doesn't change). Offloading the server when it's overloaded might be a good idea, too.
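
    A minimal sketch of that 304 short-circuit, gated by a hypothetical ::overload_mode on/off flag:

    [code]

    when HTTP_REQUEST {
        # only safe when the content genuinely hasn't changed
        if { $::overload_mode && [HTTP::header exists "If-Modified-Since"] } {
            HTTP::respond 304
        }
    }

    [/code]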

     

     

    You can find recommendations on how to write fast iRules here:

     

    http://devcentral.f5.com/wiki/default.aspx/iRules/HowToWriteFastRules.html

     

     

    And you can see how to check the performance of your iRule here:

     

    http://devcentral.f5.com/Default.aspx?tabid=63&articleType=ArticleView&articleId=123

     

     

Other ideas that might be helpful (persist users based on payload content, detect web page load time to automatically engage the special mode you're talking about, etc.):

     

    http://devcentral.f5.com/wiki/default.aspx/iRules/LTMMaintenanceWindow.html

     

     

    http://devcentral.f5.com/wiki/default.aspx/iRules/Persist_client_on_response_content_with_stream.html

     

     

    Check page load time:

     

    http://devcentral.f5.com/Default.aspx?tabid=53&view=topic&postid=24456

     

     

    Good luck!
  • Hi Mike,

     

     

    Your Codeshare entry ([url]http://devcentral.f5.com/wiki/default.aspx/iRules/HTTP_delay_and_validate_clients_using_javascript__cookies_when_CPU_is_overloaded.html[/url]) has the description but not the actual iRule. I'm curious to see the rule as it might be useful in HTTP session limiting.

     

     

    When you have a chance, could you check the post?

     

     

    Thanks!

     

    Aaron
  • Mike_Lowell_108's avatar
    Mike_Lowell_108
    Historic F5 Account
    Hmm, try this instead:

     

    http://devcentral.f5.com/wiki/print.aspx/iRules.HTTP_delay_and_validate_clients_using_javascript__cookies_when_CPU_is_overloaded
  • Sorry for the threadjacking, but thanks Mike for the alternate link. It looks like you fixed the standard page as well. That's a novel rule. Thanks for posting it.

     

     

I think you could potentially use a meta-refresh (which doesn't depend on JavaScript and shouldn't affect CPU usage on the client) to have the client automatically retry the request. Here is a simple proof of concept for the meta-refresh:

     

     

    [code]

    when RULE_INIT {
        # Trigger a meta-refresh every X seconds
        set ::refresh_interval 30

        # HTML content containing a meta-refresh to the same requested host/URI.
        # (The forum stripped the HTML tags from the original post; the markup
        # below is a reconstruction of the stripped content.)
        set ::html_response_string "<html><head><meta http-equiv=\"refresh\" content=\"$::refresh_interval; url=http://\[HTTP::host\]\[HTTP::uri\]\"><title>Retrying...</title></head><body>Meta-refresh to http://\[HTTP::host\]\[HTTP::uri\] in $::refresh_interval seconds</body></html>"
    }

    when HTTP_REQUEST {
        if { [HTTP::path] starts_with "/meta" } {
            log local0. "response string: [subst $::html_response_string]"
            HTTP::respond 200 content [subst $::html_response_string]
        } else {
            HTTP::respond 200 content "\r\n200\tClient IP:port: [IP::client_addr]:[TCP::client_port]\
    -> VIP IP:port: [IP::local_addr]:[TCP::local_port]\r\n"
        }
    }

    [/code]

     

     

    Aaron
  • Mike,

     

     

    Thanks for the pointers!

     

     

I am new to using the F5 and iRules; I was given this task since IT wanted to find a way to differentiate new and logged-in users to allow a 'controlled brownout'. Your post is very helpful.

     

     

The problem is that we cannot uniquely identify a login event with existing cookies, and we don't have the option of setting a cookie at login etc. in any near-term release.

     

     

The current problem is that under some exceptional condition our backend database server freezes up. The web servers still work fine, so users keep trying to log in multiple times, making the situation worse even for the existing users. When this situation occurred, we were not able to log in ourselves without rebooting the database server, which also lost all existing user sessions, along with any chance for us to do some analysis.

     

     

So this is just an emergency switch to guard against this condition, not a real load balancing issue. We plan to turn it on if we notice a spike in our database load, which will give us a chance to debug the problem. After we (hopefully) resolve the issue somehow, we would turn the 'normal' iRule back on.

     

     

    The existing 'normal' iRule has pools and load balancing in place based on URIs etc.

     

     

    This also means this overload emergency rule will run for a small window of time (hopefully never!).

     

     

We have several pools in the normal rule for handling different URI extensions and prefixes, which I would copy into the overload rule, so this POST inspection will happen only in the final else clause, i.e. for a more specific set of packets.

     

     

I am now inspecting only up to the unique first 57 characters using starts_with, as you suggested.

     

     

Also, when we thought about it, some latency for the existing users during this error condition is actually a good thing, since we want them to slow down during the exceptional situation (but not be logged out!).

     

     

    Your posts have given me some other ideas. Based on them, if we run into problems in perf tests of the emergency rule, I will be ready with more things if need be.

     

     

    Thanks,

     

    Sriram
  • Mike_Lowell_108
    Historic F5 Account
Sriram: Ah, that makes good sense; I understand now. An idea to help narrow down the window of when the problem starts would be to set up a fake pool that contains the same web servers as the real pool, but give this fake pool a health check that simulates a login or some other database work (i.e. send a POST to the web server that goes through to the database). That way, if the database goes crazy, the members of the fake pool will get marked down on BIG-IP, which means you'll get alerts right away (e.g. if the BIG-IP is configured to send SNMP traps, you'll get a trap about the fake pool members going down).
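
    A sketch of what such a database-exercising monitor might look like in a v9-style bigip.conf (the monitor name, URI, credentials, and expected response string are all placeholders):

    [code]

    monitor db_login_probe {
        defaults from http
        interval 30
        timeout 91
        send "POST /login HTTP/1.0\r\nContent-Type: application/x-www-form-urlencoded\r\nContent-Length: 29\r\n\r\nuser=monitor&pass=probe123xyz"
        recv "Login OK"
    }

    [/code]

    Assign this monitor to the fake pool only, so a database freeze marks those members down (and fires the alerts) without affecting traffic handling on the real pool.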

     

     

    Depending on your setup, you might also be able to take advantage of this:

     

    http://devcentral.f5.com/Wiki/default.aspx/AdvDesignConfig/ActionOnLog.html

     

    (i.e. if you're sending alerts or syslog entries off-box, you could use ActionOnLog to start a script that helps debug the database problem, for example).

     

     

hoolio: Good idea -- I hadn't thought of that. However, it's missing the per-user randomized part of the delay, which is desirable to overcome the 'clumping' of real user activity that tends to happen when an app is under attack. Still, your method is definitely better if you don't want a randomized delay; you should put your alternative in CodeShare.