Forum Discussion

Mgullia_176222
Nimbostratus
Apr 23, 2015

Confine persistence table to one TMM and one SLOT on a VIPRION multiblade chassis

Hi all, a customer needs a Single Node Persistence configuration for a virtual service. I followed the iRule listed here: https://devcentral.f5.com/wiki/iRules.SingleNodePersistence.ashx and it worked... until one blade of the cluster rebooted.
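For reference, the rule on that wiki page boils down to persisting every client on the same constant key, roughly:

    when CLIENT_ACCEPTED {
        # Every connection shares the one universal persistence record, so the
        # whole virtual server sticks to a single pool member until it fails.
        persist uie 1
    }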

 

These are the persistence records before the reboot:

 

show ltm persistence persist-records virtual devicecom-BB-2001-6
Sys::Persistent Connections
universal  192.168.1.91%1158:2001  192.168.1.88%1158:2001  1/1
universal  192.168.1.91%1158:2001  192.168.1.88%1158:2001  1/0
universal  192.168.1.91%1158:2001  192.168.1.88%1158:2001  2/1
universal  192.168.1.91%1158:2001  192.168.1.88%1158:2001  2/0

 

These are the records after the reboot:

 

show ltm persistence persist-records virtual devicecom-BB-2001-6
Sys::Persistent Connections
universal  192.168.1.91%1158:2001  192.168.1.88%1158:2001  1/0
universal  192.168.1.91%1158:2001  192.168.1.88%1158:2001  1/1
universal  192.168.1.91%1158:2001  192.168.1.89%1158:2001  2/0
universal  192.168.1.91%1158:2001  192.168.1.89%1158:2001  2/1

 

So BLADE1 (TMM 0 and TMM 1) uses node .88, while BLADE2 (TMM 0 and TMM 1) uses node .89.

 

I've tried disabling CMP on the virtual ("cmp-enabled no") to avoid this situation, but it didn't work.
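For completeness, disabling CMP was done with a tmsh command along these lines (virtual name as in the show output above):

    tmsh modify ltm virtual devicecom-BB-2001-6 cmp-enabled no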

 

Any suggestions? Should I use a different iRule?

 

Thanks

 

8 Replies

  • I think in this instance you might be better off using Priority Group Activation rather than an iRule. PGA will continue to work regardless of 'local' failure events such as a blade reboot. Of course, I'd suggest you do some research on its behaviour before proceeding.

     

    Generally, the downside compared to the iRule is that traffic will fail back to a member that went down once it returns to service, unless you configure Manual Resume, in which case manual intervention is required to restore resilience (assuming two pool members). In other words, with Manual Resume, the failure of both nodes (together or at different times) would render the service unavailable without manual intervention.
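    A minimal sketch of that approach, with placeholder pool and member names (not from your config): the member in the higher-numbered priority group takes all traffic while it is up, and min-active-members 1 means the lower-priority member is only selected when the preferred one is down.

    ltm pool example_pga_pool {
        min-active-members 1
        members {
            10.0.0.1:2001 {
                priority-group 10
            }
            10.0.0.2:2001 {
                priority-group 5
            }
        }
        monitor tcp
    }

    No iRule or persistence profile is needed for the single-active-node behaviour itself; the trade-off is the fail-back behaviour described above.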

     


  • Hi, unfortunately the downside of Priority Group Activation is exactly what the customer wants to avoid, so the solution has to be an iRule that can handle this situation. I've also read of someone who used "destination address affinity", but I didn't catch how it works.

     

  • What about CARP persistence?

    sol11362: Overview of the CARP hash algorithm

    https://support.f5.com/kb/en-us/solutions/public/11000/300/sol11362.html
     configuration
    
    root@(ve11c)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm virtual bar
    ltm virtual bar {
        destination 172.28.24.10:80
        ip-protocol tcp
        mask 255.255.255.255
        pool foo
        profiles {
            tcp { }
        }
        rules {
            qux
        }
        source 0.0.0.0/0
        source-address-translation {
            type automap
        }
        vs-index 39
    }
    root@(ve11c)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm pool foo
    ltm pool foo {
        members {
            200.200.200.101:80 {
                address 200.200.200.101
            }
            200.200.200.102:80 {
                address 200.200.200.102
            }
            200.200.200.111:80 {
                address 200.200.200.111
            }
            200.200.200.112:80 {
                address 200.200.200.112
            }
        }
    }
    root@(ve11c)(cfg-sync In Sync)(Active)(/Common)(tmos) list ltm rule qux
    ltm rule qux {
        when CLIENT_ACCEPTED {
            persist carp 1
        }
        when SERVER_CONNECTED {
            log local0. "c=[IP::client_addr]:[TCP::client_port] s=[IP::server_addr]:[TCP::server_port]"
        }
    }
    
     /var/log/ltm
    
    [root@ve11c:Active:In Sync] config # tail -f /var/log/ltm
    Apr 23 20:47:32 ve11c info tmm[5649]: Rule /Common/qux : c=172.28.24.8:38656 s=200.200.200.101:80
    Apr 23 20:47:32 ve11c info tmm1[5649]: Rule /Common/qux : c=172.28.24.8:38657 s=200.200.200.101:80
    Apr 23 20:47:32 ve11c info tmm[5649]: Rule /Common/qux : c=172.28.24.8:38658 s=200.200.200.101:80
    Apr 23 20:47:32 ve11c info tmm1[5649]: Rule /Common/qux : c=172.28.24.8:38659 s=200.200.200.101:80
    Apr 23 20:47:32 ve11c info tmm[5649]: Rule /Common/qux : c=172.28.24.8:38660 s=200.200.200.101:80
    Apr 23 20:47:32 ve11c info tmm1[5649]: Rule /Common/qux : c=172.28.24.8:38661 s=200.200.200.101:80
    Apr 23 20:47:32 ve11c info tmm[5649]: Rule /Common/qux : c=172.28.24.8:38662 s=200.200.200.101:80
    Apr 23 20:47:32 ve11c info tmm1[5649]: Rule /Common/qux : c=172.28.24.8:38663 s=200.200.200.101:80
    Apr 23 20:47:32 ve11c info tmm[5649]: Rule /Common/qux : c=172.28.24.8:38664 s=200.200.200.101:80
    Apr 23 20:47:32 ve11c info tmm1[5649]: Rule /Common/qux : c=172.28.24.8:38665 s=200.200.200.101:80
    
  • John_Alam_45640 (Historic F5 Account)

    Note: On a VIPRION system, you can mirror connections between blades within the cluster (intra-cluster mirroring) or between the clusters in a redundant system configuration (inter-cluster mirroring).

     

    You may need to enable both connection mirroring (in intra-cluster mode) and persistence mirroring to get this done.
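    A rough sketch of the pieces involved, as tmsh commands; the object names are placeholders, and the db key/value for intra-cluster mirroring is assumed from the VIPRION mirroring documentation, so verify it against your version:

    # Keep state mirroring within the cluster (intra-cluster mode);
    # db key name/value assumed - check the docs for your release.
    tmsh modify sys db statemirror.clustermirroring value within

    # Mirror connections for the virtual server (placeholder name).
    tmsh modify ltm virtual my_virtual mirror enabled

    # Mirror persistence records for the universal persistence profile
    # attached to that virtual (placeholder name).
    tmsh modify ltm persistence universal my_universal_persist mirror enabled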

     

    Which version is your customer using?

     

  • Thanks for your support! I've tried CARP persistence with no results :(

     

    Let me check on the intra-cluster mirroring option! The version is LTM 11.2.1 HF14.

     

    Thanks again!

     

    M.G.

     


  • I've checked. I have now set Cluster Options -> Network Mirroring within Cluster.

    This is the virtual server config:

    ltm virtual virtual-1-36ZF7UD.FloatingLoadBalancer.LB_Device-FB-2001-6-4566 {
        cmp-enabled no
        destination 10.23.153.205%4566:2001
        ip-protocol tcp
        mask 255.255.255.255
        persist {
            persistence_PriorityFailover {
                default yes
            }
        }
        pool pool-1-36ZF7UD.FloatingLoadBalancer.LB_Device-FB-2001-6-4566
        profiles {
            fastL4 { }
        }
        snat automap
        vlans-disabled
    }

    ltm pool pool-1-36ZF7UD.FloatingLoadBalancer.LB_Device-FB-2001-6-4566 {
        members {
            node-10.23.153.197-4566:2001 {
                address 10.23.153.197%4566
                priority-group 1
                session monitor-enabled
                state up
            }
            node-10.23.153.199-4566:2001 {
                address 10.23.153.199%4566
                priority-group 1
                session monitor-enabled
                state up
            }
        }
        monitor tcp-5-16
        service-down-action reselect
    }

    ltm persistence universal persistence_PriorityFailover {
        app-service none
        defaults-from universal
        rule ARIWATCH-PriorityFailover
        timeout 7200
    }

    ltm rule ARIWATCH-PriorityFailover {
        when CLIENT_ACCEPTED {
            persist uie 1
        }
    }
    

    So, you're telling me that, since persistence mirroring is not enabled, the cluster (blade 1 and blade 2) does not share the same persistence records? Then why, under normal conditions (blade 1 and blade 2 both in the cluster), is persistence always bound to one server or the other? See the records below (192.168.1.89):

    show ltm persistence persist-records virtual virtual-1-2VUMQQU.FloatingLoadBalancer.devicecom-BB-2001-6
    Sys::Persistent Connections
    universal  192.168.1.91%1158:2001  192.168.1.89%1158:2001  2/0
    universal  192.168.1.91%1158:2001  192.168.1.89%1158:2001  2/1
    universal  192.168.1.91%1158:2001  192.168.1.89%1158:2001  1/0
    universal  192.168.1.91%1158:2001  192.168.1.89%1158:2001  1/1
    Total records returned: 4
    

    M.G.