Forum Discussion

Chris_Miller (Altostratus)
Aug 16, 2010

Why doesn't "Forced Offline" finish off active connections?

I've come to realize that most people in my environment believe forcing a node offline ends all of its connections. In our case, if we have a server with bad data, we don't want anyone with an established connection to that server to stay connected. The documentation below leads me to believe that the only difference between "disable" and "force offline" is that "disable" still allows new connections for users with existing persistence sessions, and we don't have any persistent users.

http://support.f5.com/kb/en-us/solutions/public/7000/500/sol7566.html?sr=9444913

"Note: A disabled node continues to process persistent and active connections. It can accept new connections only if the connections belong to an existing persistence session."

"Note: A node that is forced offline allows existing connections to time out, but no new connections are allowed."

8 Replies

  • George_Watkins_ (Historic F5 Account)
    Hi Chris,

    The disabled option is there to drain connections from the selected pool member. It's used in a lot of shops where user sessions are origin-server specific. Most of the time, an operator would disable the relevant pool members, wait for the connections to drain, then force the members offline to cut off whatever remains.
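
    A rough sketch of that sequence with the bigsuds iControl client; the names are hypothetical, and the statistics structure follows my reading of the LocalLB::PoolMember docs, so verify it against your SDK version:

        import time
        import bigsuds

        b = bigsuds.BIGIP(hostname='ltm.example.com', username='admin', password='secret')
        pool, member = 'app_pool', {'address': '10.0.0.10', 'port': 80}

        # stop new (non-persistent) connections to the member
        b.LocalLB.PoolMember.set_session_enabled_state(
            [pool], [[{'member': member, 'session_state': 'STATE_DISABLED'}]])

        def current_connections():
            # dig the member's server-side connection count out of the stats reply
            reply = b.LocalLB.PoolMember.get_statistics([pool], [[member]])[0]
            for entry in reply['statistics']:
                for stat in entry['statistics']:
                    if stat['type'] == 'STATISTIC_SERVER_SIDE_CURRENT_CONNECTIONS':
                        return (stat['value']['high'] << 32) | stat['value']['low']
            return 0

        # wait (bounded) for existing connections to drain, then force the member offline
        deadline = time.time() + 600
        while current_connections() > 0 and time.time() < deadline:
            time.sleep(10)
        b.LocalLB.PoolMember.set_monitor_state(
            [pool], [[{'member': member, 'monitor_state': 'STATE_DISABLED'}]])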

    If you want to truly kill off every active connection to a pool member, you need to go to the origin server and manually stop the service in question.

    I'd say that if you aren't using persistence and your connections are short-lived, this shouldn't be much of a problem. If the connections are longer-lived (streaming media or binary data), then it's a bit more difficult to handle gracefully.

    -George
  • In addition to George's useful info, you could delete the connection table entries (using b conn) for a specific server if you really wanted LTM to stop allowing any traffic to it.

    Aaron
  • Hamish (Cirrocumulus)
    The other way is to add a monitor to that pool member that always 'fails' (e.g., a TCP monitor pointed at a port nothing listens on)...

    IIRC there's a way to force down a pool member (or was it a node?) from the command line too... (or there was in 9.0), emulating the way 4.x used to work...

    H
  • Hamish (Cirrocumulus)
    It might be

    b node down

    that does it... Sorry, it's been a while since I had to manually force users off a pool member...
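
    For what it's worth, the rough iControl counterpart of that command, sketched with bigsuds (hypothetical host and address); note it acts on the node address, so it affects every pool that references the node:

        import bigsuds

        b = bigsuds.BIGIP(hostname='ltm.example.com', username='admin', password='secret')

        # mark the node address down, roughly what "b node <addr> down" does
        b.LocalLB.NodeAddress.set_monitor_state(['10.0.0.10'], ['STATE_DISABLED'])

        # and bring it back up afterwards
        b.LocalLB.NodeAddress.set_monitor_state(['10.0.0.10'], ['STATE_ENABLED'])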
  • Thanks, all... I assumed the failed monitor and connection delete would be options. It's just tough because the people disabling servers don't have access to adjust the box configs; they disable the servers through an iControl-based webpage we made.

    Thanks for the tips!
  • Hamish (Cirrocumulus)
    Ah. If they have access via a webpage that uses iControl, just use the iControl calls to add an always-fail monitor... and remove it again the same way.
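
    A sketch of that approach with bigsuds. It assumes a monitor template named always_fail already exists (for example, a TCP monitor pointed at a port nothing listens on); the association structs follow my reading of the LocalLB::PoolMember docs, so verify the field names against your SDK version:

        import bigsuds

        b = bigsuds.BIGIP(hostname='ltm.example.com', username='admin', password='secret')
        assoc = {
            'member': {'address_type': 'ATYPE_EXPLICIT_ADDRESS_EXPLICIT_PORT',
                       'ipport': {'address': '10.0.0.10', 'port': 80}},
            'monitor_rule': {'type': 'MONITOR_RULE_TYPE_SINGLE', 'quorum': 0,
                             'monitor_templates': ['always_fail']},
        }
        # attach the always-fail monitor so the member gets marked down...
        b.LocalLB.PoolMember.set_monitor_association(['app_pool'], [[assoc]])

        # ...and later detach it again the same way
        assoc['monitor_rule'] = {'type': 'MONITOR_RULE_TYPE_NONE', 'quorum': 0,
                                 'monitor_templates': []}
        b.LocalLB.PoolMember.set_monitor_association(['app_pool'], [[assoc]])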

    H
  • Posted By Hamish on 08/20/2010 06:47 AM

    "Ah. If they have access via a webpage that uses iControl, just use the iControl calls to add an always-fail monitor... and remove it again the same way."

    And that I do like!
  • I use bigpipe conn from time to time, very useful. I like the always-fail monitor idea; I'll have to try that one. Anyone have sample code they use for their iControl webpage to admin VSs/pools? (See the sketch below.)

    b node down is good as well; just keep in mind you'll take that node down in every other pool it's a member of... useful if that's what you want, but not if multiple apps are hosted on that server and you're looking to kill just that member for the specific app.
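
    On the sample-code question, a stripped-down sketch of the kind of helpers such a page might wrap, using the bigsuds iControl client (hypothetical names; a starting point, not a hardened implementation):

        import bigsuds

        def connect(host, user, password):
            # one authenticated SOAP session, reused for all calls
            return bigsuds.BIGIP(hostname=host, username=user, password=password)

        def list_pool_members(b, pool):
            # returns [{'address': '10.0.0.10', 'port': 80}, ...]
            return b.LocalLB.Pool.get_member([pool])[0]

        def set_member_enabled(b, pool, member, enabled):
            # enabled=False is the "Disabled" state: persistent and active
            # connections continue, but new ones are refused
            state = 'STATE_ENABLED' if enabled else 'STATE_DISABLED'
            b.LocalLB.PoolMember.set_session_enabled_state(
                [pool], [[{'member': member, 'session_state': state}]])

        if __name__ == '__main__':
            b = connect('ltm.example.com', 'admin', 'secret')
            for m in list_pool_members(b, 'app_pool'):
                print('%s:%s' % (m['address'], m['port']))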