Forum Discussion

Josh_41258
Nimbostratus
Sep 21, 2009

Management of pool members behind LTM

I have a scenario in which several pool members are using the LTM's floating self IP address as their default gateway in order to preserve real client source IPs. I'm trying to figure out the best way to handle management of these backend servers, since they aren't directly reachable unless I go through the LTM.

One option is to simply create a virtual server for things like RDP (TCP/3389) so I can reach the boxes via RDP or other services. If I go that route, would you recommend creating a separate virtual server for every service I need to access on the pool members?
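
For example, something like this per server and per service (placeholder names and addresses, tmsh syntax; the bigpipe equivalents on v9 would differ):

    create ltm pool pool_web1_rdp members add { 10.10.1.11:3389 }
    create ltm virtual vs_web1_rdp destination 192.0.2.51:3389 ip-protocol tcp pool pool_web1_rdp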

Another option would be to have a second network interface on the pool members which resides on a routable network.

I also see that nPath routing could be a possible solution, but I'd rather not use it.

How is everyone else handling this?

Thanks,

Josh

5 Replies

  • Hi Josh,

    One-to-one VIPs might be the most secure option. There are a few other methods discussed in this solution:

    SOL7229: Methods of gaining administrative access to nodes through the BIG-IP system
    https://support.f5.com/kb/en-us/solutions/public/7000/200/sol7229.html

    Aaron
  • If you're doing large transfers of data that don't need to be load balanced (like backups), a separate management NIC on each server would save passing all that traffic through the LTM. It's also nice to know that if there's an LTM issue, you can still get to the servers directly. The load-balanced clients couldn't be on the admin network, though, or I think most servers would respond to them directly out of the admin interface. I don't think you need to use nPath or change the load-balancing configuration to do this; worst case, you might need a network device to do source address translation of the admin traffic before it reaches the servers.

    Also, make sure not to change the servers' default gateway to the admin network's gateway; it should stay pointed at the LTM floating self IP.
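
    On Windows you could instead add a persistent static route for the admin subnet out the admin NIC, e.g. (made-up subnets, with 10.10.2.1 as the router on the admin network):

        route -p add 10.99.0.0 mask 255.255.255.0 10.10.2.1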

    Aaron
  • Well, building separate virtual servers seems like the best way to go until you have to support RPC/DCOM-based services, which use a large range of ports. I could create an "administrative" virtual server listening on * (which I don't typically do) and another virtual server listening specifically on TCP/80, for the application that will be load balanced. Both could use the same pool, or two different pools -- one with members listening on every port and one with members listening only on TCP/80. Is this possible and advisable? Sorry if that's confusing.
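
    In other words, the two-pool variant would look roughly like this (names and addresses made up, tmsh syntax, port 0 meaning "any port"):

        create ltm pool pool_app_http members add { 10.10.1.11:80 10.10.1.12:80 }
        create ltm virtual vs_app_http destination 192.0.2.60:80 ip-protocol tcp pool pool_app_http

        create ltm pool pool_admin_any members add { 10.10.1.11:0 10.10.1.12:0 }
        create ltm virtual vs_admin_any destination 192.0.2.61:0 ip-protocol any pool pool_admin_any translate-port disabled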

    The management NIC idea sounds good, but I do have clients on the management subnet that will need to be load balanced. I'm not sure how Windows will handle that routing.

    Josh
  • If you're using a VIP to manage each node, you'd probably want one virtual server IP address per node. You could configure it on port 0 (any port) and then only allow specific hosts/subnets to connect to the admin VIPs. If it's HTTP, you could potentially map one VIP to specific pool members using host headers or URI mapping, but it's probably simpler to use a one-to-one mapping of VIP to node for admin access.
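
    Roughly, per node (addresses made up, tmsh syntax):

        create ltm pool pool_web1_all_ports members add { 10.10.1.11:0 }
        create ltm virtual vs_web1_admin destination 192.0.2.71:0 ip-protocol any pool pool_web1_all_ports translate-port disabled

    and to limit who can connect, an iRule along these lines (admin subnet assumed) attached to each admin VIP should do it:

        when CLIENT_ACCEPTED {
            # drop connections that don't come from the admin subnet
            if { ![IP::addr [IP::client_addr] equals 10.99.0.0/24] } {
                reject
            }
        }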

    Also, I'm pretty sure Windows would reply to a client on the admin subnet directly via the admin NIC. So you'd need to keep the admin clients on a separate subnet if you don't want to do source address translation on the load-balancing VIPs.
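
    If you do end up with load-balanced clients on the admin subnet, SNAT automap on the load-balancing virtual would force replies back through the LTM; on current tmsh that's roughly (virtual name assumed from the earlier sketch):

        modify ltm virtual vs_app_http source-address-translation { type automap }

    (on v9/v10 the bigpipe equivalent would be "b virtual vs_app_http snat automap")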

    Aaron