Forum Discussion

Brendan_Hogan_9
Jun 26, 2009

IRule to selectively allow subnets no longer working

Actually 2 issues:

1) We are currently on

We used to use the following iRule during maintenance windows to allow only particular subnets to connect; it sent users outside those subnets to a fallback page. The last 2 times I enabled it, users from outside those subnets were able to connect to the application no problem and were not getting the redirect. I know at one point we had upgraded to BIG-IP 9.3.1 Build 37.1 and also made many network changes, but since we only need this iRule several times a year per application, I really could not attribute a specific change to it no longer working correctly. Any ideas what might need to be changed in this iRule to make it work again?

when HTTP_REQUEST {
    if { [IP::addr [IP::client_addr]/24 equals 100.100.100.100] } {
        pool sa89prod
    } elseif { [IP::addr [IP::client_addr]/24 equals 200.200.200.200] } {
        pool sa89prod
    } elseif { [IP::addr [IP::client_addr]/22 equals 10.10.10.10] } {
        pool sa89prod
    } else {
        HTTP::redirect "https://x.y.com"
    }
}

2) Not sure where to post this - it's not an iRule issue, but it's similar to the issue above. We used to disable nodes within a pool to prevent new connections and "bleed" users off for maintenance on a particular server within the pool, without interrupting any current session. Any best-practice suggestions on how to better accomplish this - maybe an iRule? We know users sometimes leave their sessions open. One application in particular times a session out after 20 minutes of inactivity, but as best I can tell those sessions still show up in the pool statistics as connections. Basically we disable the node and then watch the pool statistics until there are close to zero connections, but the connections just seem to keep coming in. Are there any timeout settings on the BIG-IP side I should look at?

4 Replies

  • Hi,

    1. The iRule looks fine. You could create a test VIP and test with that. Try adding logging to see if the match is made. If you're able to reproduce the problem, try adding a OneConnect profile to the test VIP.
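
    A sketch of what that logging could look like, reusing the pool name and first subnet comparison from the original rule (the log messages themselves are illustrative, not from the post):

        when HTTP_REQUEST {
            if { [IP::addr [IP::client_addr]/24 equals 100.100.100.100] } {
                log local0. "maintenance iRule: [IP::client_addr] matched allowed subnet"
                pool sa89prod
            } else {
                log local0. "maintenance iRule: [IP::client_addr] did not match - redirecting"
                HTTP::redirect "https://x.y.com"
            }
        }

    Watching /var/log/ltm while a client from outside the allowed subnets connects should show whether the address comparison is matching more broadly than expected.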
    2. What is the idle timeout of the TCP profile on the virtual server? LTM should reset the client connection after the idle timeout expires. You might also try setting the node to Forced Offline so that only clients with an active TCP connection to the VIP will be allowed to continue. Disabled means that clients with a valid persistence record will still be able to access the VIP.

    Aaron
  • 1. I'll try the OneConnect profile.

    2. Idle timeout was changed for some reason to 60000 - a wee bit high! Time wait is still 2000. How do you force a node offline rather than just disable it? This sounds like exactly what I need to do.
  • For HTTP traffic, the TCP idle timeout typically shouldn't need to be set above 5 minutes, and even that can be lowered for high-volume web apps. You might do better to leave the default TCP profile at 300 seconds and create a custom TCP profile for whichever virtual server needs a very long idle timeout. You can also create a second custom profile with a lower timeout for this particular app if you want.

    You can force a node offline under Local Traffic >> Nodes. This will take effect for all pool members defined on the node IP address.

    Aaron