Forum Discussion

3 Replies

  • The idea here is that they're creating two VIPs: one listening on a specific IP and port, and a second listening on a wildcard IP (0.0.0.0) and a specific port (80). Because the first VIP is more specific, all traffic destined for that IP and port will match it by default. If the specific VIP goes down, the less specific VIP starts answering instead (see the sketch below). Personally I think this config is a bit unnecessary, as you now have a backup VIP that'll listen on any IP, so I'd question the motivation here. It's far more likely that a backend server will fail before a BIG-IP VIP does, so I'd concentrate on making the actual load-balanced services more redundant. Of course, with GTM you don't need to do any of this, as your "backup" LTM VIPs would be listening on different IPs anyway.
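
    A sketch of what that two-VIP setup could look like in tmsh (the VIP names, pool names, and the 10.0.0.10 address here are hypothetical, not from the original config):

    ltm virtual vs_app_specific {
        # Specific listener: most specific match, so it takes all traffic to 10.0.0.10:80
        destination 10.0.0.10:80
        ip-protocol tcp
        pool app-pool
    }
    ltm virtual vs_app_wildcard {
        # Wildcard listener: only catches traffic that no more specific VIP answers
        destination 0.0.0.0:80
        ip-protocol tcp
        pool app-backup-pool
    }

    BIG-IP always matches the most specific listener first, so the wildcard VIP only sees this traffic once the specific VIP is out of the picture.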

  • R_Marc:

    You do this with pools and priority groups.

    For example:

    ltm pool some-https-pool {
        members {
            10.1.1.1:443 {
                address 10.1.1.1
                # Highest priority group: takes all traffic while available
                priority-group 100
                session monitor-enabled
                state up
            }
            10.2.2.2:443 {
                address 10.2.2.2
                # Default group 0: only used when group 100 is exhausted
                priority-group 0
                session monitor-enabled
                state up
            }
        }
        min-active-members 1
        monitor some-https-monitor
        service-down-action reselect
    }

    In the above case, 10.1.1.1 is in priority group 100 and 10.2.2.2 is in the default group 0, so traffic won't go to 10.2.2.2 unless 10.1.1.1 is unavailable; this is the equivalent of how backup vservers work (but it's much more readable, IMNSHO). You can have as many members in any given priority group as you require, and as many priority groups as you want (in case you have, say, primary, backup, and tertiary tiers). The highest priority group wins.

    So if you have three priority groups (200, 100, 0) with min-active-members 1, traffic will go to group 200 until no members are available there, then to group 100 until no members are available there, and finally to group 0. A sketch of that layout follows.
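
    Here's a minimal sketch of such a three-tier pool (the member addresses and the pool/monitor names are hypothetical):

    ltm pool three-tier-pool {
        members {
            10.1.1.1:443 {
                address 10.1.1.1
                # Primary tier
                priority-group 200
            }
            10.2.2.2:443 {
                address 10.2.2.2
                # Backup tier
                priority-group 100
            }
            10.3.3.3:443 {
                address 10.3.3.3
                # Last resort
                priority-group 0
            }
        }
        min-active-members 1
        monitor some-https-monitor
        service-down-action reselect
    }

    You can check which members are currently taking traffic with: tmsh show ltm pool three-tier-pool members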