The iRules CodeShare on DevCentral is an amazingly powerful, diverse collection of iRules that perform a myriad of tasks ranging from credit card scrubbing to form-based authentication to, as in today's example, limiting the number of HTTP sessions allowed. While the CodeShare is outstanding, it is a collection of code contributed over the last several years. As such, some of it is written for older versions, like 9.x, where we didn't have some of the powerful, efficient commands and tools that we do currently within iRules. That is where the idea for a CodeShare Refresh series came from...getting those older v9-esque rules moved into modern times with table commands, static namespace variables, out-of-band connections and all of the other benefits that come along with the more modern codebase. We'll be digging through the CodeShare, fishing out old rules and reviving them, then posting them back for future generations of DevCentral users to leverage. We'll also try to comment on why we're making the changes that we are, so you can see, as we update the code, what has changed between versions. With that, let's get started.

First I'll post the older snippet, then the updated version, ready to go into the wild in v11.x. The new rule in its entirety and the link to the older version can be found below.

 

Previously, in pre-CMP-compliant v9 iRules, it was relatively commonplace to set global variables. This is a big no-no now, as it demotes connections out of CMP, which is a large performance hit. The old iRule's RULE_INIT section looked like this:

   1: when RULE_INIT {
   2:  set ::total_active_clients 0
   3:  set ::max_active_clients 100
   4:  log local0. "rule session_limit initialized: total/max: $::total_active_clients/$::max_active_clients"
   5: }

The newer version updated for v11 looks like:

   1: when RULE_INIT {
   2:   set static::max_active_clients 100
   3: }

Note the use of the static:: namespace. This is a place to safely store static information in a globally available form that will not interfere with CMP. These values are, as the namespace indicates, static, but that's exactly what we need in many cases like this one, where we're setting a cap on the number of clients we want to allow. Also note that there is no active clients counter at all, because of changes we've made later in the iRule. As a result, it made no sense to log the initialization line from the older iRule either, so we've trimmed the RULE_INIT event down a bit.
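To make the difference concrete, here's a quick sketch (not part of the original rule) showing that a static:: variable set in RULE_INIT can be read from any event without demoting the connection out of CMP; the /status URI check is purely hypothetical:

```tcl
when HTTP_REQUEST {
   # hypothetical status check: reading a static:: variable is safe
   # under CMP, unlike writing to a ::global variable
   if {[HTTP::uri] eq "/status"} {
      HTTP::respond 200 content "client cap: $static::max_active_clients"
   }
}
```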

Next up, the first half of the HTTP_REQUEST event, in which the max_active_clients is compared to the current number of active clients.

First the v9 code from the CodeShare:

   1: when HTTP_REQUEST {
   2:  ;# test cookie presence
   3:  if {[HTTP::cookie exists "ClientID"]} {
   4:    set need_cookie 0
   5:    set client_id [HTTP::cookie "ClientID"]
   6:    ;# if cookie not present & connection limit not reached, set up client_id
   7:  } else {
   8:    if {$::total_active_clients < $::max_active_clients} {
   9:      set need_cookie 1
  10:      set client_id [format "%08d" [expr { int(100000000 * rand()) }]]

Now the v11 code:

   1: when HTTP_REQUEST {
   2:   # test cookie presence
   3:   if {[HTTP::cookie exists "ClientID"]} {
   4:     set need_cookie 0
   5:     set client_id [HTTP::cookie "ClientID"]
   6:     # if cookie not present & connection limit not reached, set up client_id
   7:   } else {
   8:     if {[table keys -subtable httplimit -count] < $static::max_active_clients} {
   9:       set need_cookie 1
  10:       set client_id [format "%08d" [expr { int(100000000 * rand()) }]]

The only change here is a pretty notable one: out with global variables, in with session tables! Here we introduce the table command, released in v10, which gives us extremely efficient access to the session table. In this iRule all we need is a counter, so we're using a subtable called httplimit and adding a new record to that subtable for each session coming in. Then, with the table keys command and its -count flag, we can quickly and efficiently count the number of rows in that subtable, which gives us the number of HTTP sessions currently active for this VIP. Note that the rest of the code stayed the same. There are many ways to do things in iRules, but I'm trying not to fiddle with the logic or execution of the rules in this series more than is necessary to update them for the newer versions.
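As a standalone sketch of the pattern (the log line is purely illustrative, not part of the original rule), each new session adds one uniquely keyed row to the subtable, and counting the rows yields the active-session total:

```tcl
# one row per session, keyed by client IP:port so every entry is unique
table set -subtable httplimit [IP::client_addr]:[TCP::client_port] "blocked"

# -count returns the number of rows, i.e. the number of active sessions
log local0. "active sessions: [table keys -subtable httplimit -count]"
```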

So now that we're using the table command to do the lookups, we should likely use it to set and increment the counter as well. That occurs in the other half of the HTTP_REQUEST event. First v9:

   1: # Only count this request if it's the first on the TCP connection
   2: if {[HTTP::request_num] == 1}{
   3:   incr ::total_active_clients
   4: }
   5: ;# otherwise redirect
   6: } else {
   7:   HTTP::redirect "http://sorry.domain.com/"
   8:   return
   9: }
  10: }

Again you can see the use of the global variable, and the incr command. Next is the v11 update:

   1: # Only count this request if it's the first on the TCP connection
   2: if {[HTTP::request_num] == 1}{
   3:   table set -subtable httplimit [IP::client_addr]:[TCP::client_port] "blocked"
   4:   set timer [after 60000 -periodic { table lookup -subtable httplimit [IP::client_addr]:[TCP::client_port] }]
   5: }
   6: } else {
   7:   HTTP::redirect "http://sorry.domain.com/"
   8:   event CLIENT_CLOSED disable
   9:   return
  10: }
  11: }

As you can see, things have changed quite a bit here. First of all, this is how we're using the table command to increment our counter. Rather than keeping a single counter, we're adding rows to a particular subtable in the session table, as I mentioned before, using the client's IP & port as a unique identifier for that client. This is what allows the table keys lookup to count the number of active clients.

We're also instantiating a timer here. Using the after -periodic command we set up a non-blocking loop that touches the entry we've just created every 60 seconds. This is because the entry in the session table has the default timeout of 180 seconds. Now...we could have made that entry permanent, but that's not what we want. When counting things using an event-based structure it's important to take into account cases where a particular event might not fire. While it's rare, there are technically cases where the CLIENT_CLOSED event may not fire if circumstances are just right. In that case, using the old structure with just a simple counter, the count would be off and could drift. This timer, which you'll see terminated in CLIENT_CLOSED in the last section along with the removal of this session's table entry (effectively decrementing the counter), ensures that even if something wonky happens, the count will resolve and remain accurate. A bit of a concept to wrap your head around, but a solid one, and it introduces far less overhead than we gain back by keeping this rule CMP compatible.
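For reference, the entry's timeout could also be set explicitly rather than relying on the 180 second default; this is a sketch of that variation, not part of the original rule. Any lookup, including the periodic touch described above, resets the countdown:

```tcl
# explicit 180 second timeout instead of the default; the periodic
# lookup resets this countdown each time it touches the entry, so the
# entry only ages out if CLIENT_CLOSED never fires to clean it up
table set -subtable httplimit [IP::client_addr]:[TCP::client_port] "blocked" 180
```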

Also note that we're disabling the CLIENT_CLOSED event when the user is over the limit. Since no table entry or timer was created for this connection, there's nothing for CLIENT_CLOSED to clean up, and disabling it ensures we don't try to decrement a counter entry that was never added.

Next is the HTTP_RESPONSE event, which remains entirely unchanged, so the v9 & v11 versions are the same:

   1: when HTTP_RESPONSE {
   2:  # insert cookie if needed
   3:  if {$need_cookie == 1} {
   4:    HTTP::cookie insert name "ClientID" value $client_id path "/"
   5:  }
   6: }

And last, but not least, the CLIENT_CLOSED event. First v9, with our simple counter and the nearly infamous incr -1:

   1: when CLIENT_CLOSED {
   2:  ;# decrement current connection counter for this client_id
   3:  if {$::total_active_clients > 0} {
   4:    incr ::total_active_clients -1
   5:  }
   6: }

And now the updated version for v11:

   1: when CLIENT_CLOSED {
   2:   # decrement current connection counter for this client_id
   3:   after cancel $timer
   4:   table delete -subtable httplimit [IP::client_addr]:[TCP::client_port]
   5: }

The two main things to note here are that we're no longer doing an if check, since with this method the counter can't drop below zero, and the way we're decrementing things. In fact, we're not decrementing at all: we're deleting the row out of the subtable that represents the current HTTP session. As such, it won't be counted on the next lookup, and poof...decremented counter. We're also canceling the periodic after that we spun up in the HTTP_REQUEST section to keep our entry pinned in the session table until the session actually terminates.

So there you have it, a freshly updated version of the HTTP Session Limiting iRule out of the CodeShare. Hopefully this is helpful and we'll continue refreshing this valuable content for the ever expanding DevCentral community. Here is the complete v11 iRule, which can also be found in the CodeShare:

   1: when RULE_INIT {
   2:   set static::max_active_clients 100
   3: }
   4:  
   5: when HTTP_REQUEST {
   6:   # test cookie presence
   7:   if {[HTTP::cookie exists "ClientID"]} {
   8:     set need_cookie 0
   9:     set client_id [HTTP::cookie "ClientID"]
  10:     # if cookie not present & connection limit not reached, set up client_id
  11:   } else {
  12:     if {[table keys -subtable httplimit -count] < $static::max_active_clients} {
  13:       set need_cookie 1
  14:       set client_id [format "%08d" [expr { int(100000000 * rand()) }]]
  15:  
  16:       # Only count this request if it's the first on the TCP connection
  17:       if {[HTTP::request_num] == 1}{
  18:         table set -subtable httplimit [IP::client_addr]:[TCP::client_port] "blocked"
  19:         set timer [after 60000 -periodic { table lookup -subtable httplimit [IP::client_addr]:[TCP::client_port] }]
  20:       }
  21:     } else {
  22:       HTTP::redirect "http://sorry.domain.com/"
  23:       event CLIENT_CLOSED disable
  24:       return
  25:     }
  26:   }
  27: }
  28:  
  29: when HTTP_RESPONSE {
  30:   # insert cookie if needed
  31:   if {$need_cookie == 1} {
  32:     HTTP::cookie insert name "ClientID" value $client_id path "/"
  33:   }
  34: }
  35:  
  36: when CLIENT_CLOSED {
  37:   # decrement current connection counter for this client_id
  38:   after cancel $timer
  39:   table delete -subtable httplimit [IP::client_addr]:[TCP::client_port]
  40: }
Comments on this Article
Comment made 27-Dec-2011 by hoolio 2495
It would make sense to force closure of the TCP connection when redirecting a client to a blocking page if you're only checking the first HTTP request on each TCP connection. As it is, a client could ignore the redirect and continue making more HTTP requests on the same TCP connection to bypass the iRule logic.

At some point it would make sense (for someone :) to add validation of each HTTP request on the connection to handle different clients who might be connecting from behind the same proxy.

Aaron
Comment made 27-Dec-2011 by Colin Walker 3814
It certainly isn't a hardened security rule, you're right. I'll likely take a cut at a more robust version incorporating more use cases, but for this series I'm looking to keep things intact the way they were in the CodeShare so people can see what has changed between versions.

#Colin
Comment made 03-May-2013 by Haris 0
Thanks, Colin.
Do you have any idea how to test whether it's working?
I tried changing max_active_clients to 1 and then accessed from 2 different PCs, but all connections were successful.

~haris~
Comment made 29-Jan-2014 by Daan 0
We are using another version of such an iRule and it works great, but I think there is typo in the iRule:

set timer [after 60000 -periodic { table lookup -subtable httplmit [IP::client_addr]:[TCP::client_port] }

should be:

set timer [after 60000 -periodic { table lookup -subtable httplimit [IP::client_addr]:[TCP::client_port] }

The 'i' is missing in httplimit.
Comment made 15-Mar-2015 by kridsana 640
I got error On line 19 . Should I add more bracket ']' to end of the line ? like this set timer [after 60000 -periodic { table lookup -subtable httplimit [IP::client_addr]:[TCP::client_port] }]
Comment made 02-Aug-2016 by Willys46 0

kridsana,

Yes, the syntax dictates that you need the extra bracket at the end.

I know this is an old case but we are struggling with the same issue. We are running V12 which I understand should be able to run V11 irules fine. There was one change made to the rule we have and that is the following:

17: if {[HTTP::request_num] == 1}{
18: table set -subtable httplimit [IP::client_addr]:[TCP::client_port] "blocked"
19: set timer [after 60000 -periodic { table lookup -subtable httplmit [IP::client_addr]:[TCP::client_port] }
20: }

TO:

17: if {not ([table keys -subtable httplimit -count] > $static::max_active_clients)} {
18: table set -subtable httplimit [IP::client_addr]:[TCP::client_port] "blocked" 540
19: set timer [after 60000 -periodic {table lookup -subtable httplimit [IP::client_addr]:[TCP::client_port] } ]
20: }

I didn't put this rule in place so I am not sure what the reason was for setting it this way, but I assume it was to address what you were saying, hoolio. That said, our code doesn't appear to be working as expected. I think this may have something to do with it, and the second reason I think it may be balking is because of a concurrency issue. My understanding is that this rule gets run per HTTP session as they are made to the F5. But what happens if say 2000 clients attempt to make a connection at the same time? Wouldn't each of those essentially query the database and see it in the same state, and thus each think they need to be added to the allowed pool?

Also, can someone explain what the line "table set -subtable httplimit [IP::client_addr]:[TCP::client_port] "blocked" 540" is doing? I was under the impression that we were adding IPs to the approved list, but this seems like it is adding those who are blocked, which seems backwards to me.

Comment made 22-Sep-2017 by brad 375

Shouldn't the code in line 5 be using the '-count' option to get the current table size? As it reads it is returning the value of the first entry in the table...

12: if {not ([table keys -subtable httplimit -count] > $static::max_active_clients)} {

Also, I'm a bit confused by the logic using 'after 60000' to retouch the table entry. If the entry times out in 180 seconds it will disappear by itself, and I would think logic that has the client retouch it with each HTTP_REQUEST would refresh it. This seems to be connection based, as CLIENT_CLOSED drops the entry from the table even though the user still has a web session (and cookie).

I may be trying to use this incorrectly. I have a web service where I need to restrict the number of overall concurrent users. The application uses JSESSIONID so thinking I could use that existing cookie. I can then add a table entry for the user if I don't have them and retouch it for every transaction to keep it active. Once they go away it will timeout and the entry will drop, opening a session slot for someone else.

Comment made 09-Jan-2018 by anatolyel 0

will it work for multiple domains?

Comment made 4 months ago by kridsana 640

I'm not sure this really limits HTTP sessions. This iRule deletes the table entry when a connection in the session closes, but that doesn't mean the client is exiting the application, am I correct?

So could it allow more active users than we expect?

(i.e. expecting a max of 1 session: client 1 uses the application, finishes getting index.html, and that connection closes, so the entry is removed. Client 2 can then use the application too, because the entry count is now below the limit. The result is two sessions using the application, which is more than we expect.)

Could we remove the CLIENT_CLOSED event from this iRule and instead wait for the table entry to time out (180 seconds by default)? That way it wouldn't allow more active sessions than we expect. (The issue is that a new session couldn't enter even after an old session finishes using the application; it would have to wait for the entry to time out.)

Comment made 4 months ago by brad 375

This is the code I finally put in place to limit sessions (not connections). It uses an application-set cookie; many applications will already have one, JSESSIONID being common. Otherwise a cookie can be inserted if the application doesn't set one. The first call will not have a cookie, indicating a new user with a new session. Once that cookie exists, the rule will honor the session even if the limit has been reached.

# devcentral codeshare seems to be based on a connection not a session as it maintains the source port and IP
# as web sessions may initiate multiple sessions from the same IP but different ports.
# In this case the application maintains a session cookie which is specified in static sessioncookie variable.
# (could always add a HTTP_RESPONSE section to create/insert a cookie if the application doesn't maintain one).
# If the cookie doesn't exist this must be a new user who hasn't established a server session yet. In that case
# check the current session table size and if we are under the limit, let them in.  The server will generate the session cookie.
# Subsequent calls will be permitted regardless of the size of the active session table- don't want to drop a user who is in session.
# Active session table entries are aged for responsewait seconds.  If the user is slow the entry will drop out and could
# permit another user to begin a session even though this user returns and recreates their sessioncookie entry. 
# In that case it will go 'overlimit'.  This might need tweaking depending on the application use.


when RULE_INIT {
  set static::max_active_clients 2000
  set static::responsewait 300
  set static::sessioncookie "ANONYMOUS_COOKIE"
}

when HTTP_REQUEST {
#   this gets the pool name assuming it is in the /Common/ path so it strips the leading /Common/  (or on/ trail)
  set hstable "limit-[findstr [LB::server pool] "on/" 3]"
  if {[HTTP::cookie exists $static::sessioncookie]} {
    if {[HTTP::request_num] == 1}{
# giving them responsewait seconds for their next transaction otherwise they will be removed from the table.
# but the logic will add them back in again on their next transaction as they have a sessioncookie.
# note that the lookup will reset the expiration timer.
      if { [table incr -subtable $hstable -mustexist [HTTP::cookie value $static::sessioncookie]] == "" } {
#        log local1. "Entry for expired [HTTP::cookie value $static::sessioncookie] [IP::client_addr]:[TCP::client_port] recreated."
        table set -subtable $hstable [HTTP::cookie value $static::sessioncookie] 0 $static::responsewait
      }
    }
  } else {
    # if cookie not present & connection limit reached, sorry
    if {[table keys -subtable $hstable -count] >= $static::max_active_clients} {
      log local0. "SESSION LIMIT REACHED. CLIENT IP [IP::client_addr] WAS GIVEN SORRY RESPONSE.  Session count/limit: [table keys -subtable $hstable -count]/$static::max_active_clients"
#      HTTP::redirect "http://sorry.domain.com/"
      HTTP::respond 200 content "[ifile get ifile_iCarsLimit]" "Cache-Control" "no-cache" "Pragma" "no-cache" "Connection" "Close"
#      HTTP::respond 200 content "SorrySorry, we are currently busy serving other customers and cannot serve you at this moment, but please try again in a couple minutes!" "Cache-Control" "no-cache" "Pragma" "no-cache" "Connection" "Close"
      event disable
    }
  }
}

# simple code would be to simply allow the cookie to be set in the request above.
# the below will see if the session cookie is being created and if so will set the table entry for it along with the timeout.
# then the code above does a lookup which resets the timeout.  It will issue an error if the lookup fails, which means that 
# the entry aged out and a subsequent request from that client came in.  Log and recreate - a way to see how much this happens.

when HTTP_RESPONSE { 
  if {[HTTP::header exists "Set-Cookie"]} { 
    foreach cookievalue [HTTP::header values "Set-Cookie"] { 
      if {$cookievalue starts_with $static::sessioncookie} { 
# some applications add the cookie with each transaction - increment if there otherwise create
        if { [table incr -subtable $hstable -mustexist [HTTP::cookie value $static::sessioncookie]] == "" } { 
          table set -subtable $hstable [findstr $cookievalue "=" 1 ";"] 0 $static::responsewait
#          log local1. "Entry for session $cookievalue IP: [IP::client_addr]:[TCP::client_port] created."
        }
      }
    } 
  }
}




code to dump the table:

  if {[URI::decode [HTTP::uri]] starts_with "/F5/report"} {
    if {[URI::decode [HTTP::uri]] starts_with "/F5/reportall"} {
      set rpt "$static::sessioncookie Txns Timeout\n"
      foreach key [table keys -notouch -subtable $hstable] {
        set remain [table timeout -subtable $hstable -remaining $key]
        set value [table lookup -subtable $hstable -notouch $key]
        append rpt "$key $value $remain\n"
      }
      append rpt "\nTimeout is $static::responsewait seconds (use /F5/reportall-reset to reset.)"
      set rfc1123date [clock format [clock seconds] -format "%a, %d %h %Y %T GMT" -gmt true]
      HTTP::respond 200 content "Session Limit Table - $hstable\n\n$rpt" "Cache-Control" "no-cache" "Pragma" "no-cache" Date $rfc1123date Expires $rfc1123date "Connection" "Close"
      if {[URI::decode [HTTP::uri]] starts_with "/F5/reportall-reset"} {
        table delete -all -subtable $hstable
      }
      event disable
    } else {
      set sessct [table keys -subtable $hstable -count]
      HTTP::respond 200 content "Current Session Count/Limit is $sessct/$static::max_active_clients." "Cache-Control" "no-cache" "Pragma" "no-cache" "Connection" "Close"
      event disable
    }
  }

I don't want to confuse things by introducing something else, but on the other hand, happy to share what I ended up implementing.
