Forum Discussion

Leslie_South_55
Nimbostratus
Apr 22, 2008

In-Line or One-Arm LTM Placement

OK, so this may be a little "BigIP 101," but I wanted to ask the question anyway. I have been using v9 BIG-IP for about two years now, and during the first implementation we put the LTM smack dab in the middle of the traffic and set it up to be the default gateway...just like they teach you in class. After several weeks of chasing many ghosts, rabbits, and other unknown anomalies, we decided to pull the BIG-IP out of the traffic-routing business and implemented a one-arm config. We have been using this successfully for those two years, utilizing SNAT pools, X-Forwarded-For headers, etc. Now we have reached a point in our application where a high-volume FTP process will become part of the business, and I realized that I may not be able to track and manipulate the FTP traffic the way I do HTTP traffic.

 

 

So with all my rambling, my main question is: where SHOULD the BIG-IP live, in line or off to the side? My thought was, let routers route and load balancers load balance; take all non-load-balanced traffic off the F5.
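
For concreteness, here is a minimal sketch of the kind of one-arm virtual server described above, in modern tmsh/bigip.conf syntax (names and addresses are hypothetical; on the v9 train we were running, the SNAT setting was expressed as "snat automap" instead). Automap rewrites each client's source address to a self IP so replies return through the LTM even though it is not the default gateway:

    ltm virtual /Common/vs_app_http {
        destination /Common/10.10.10.100:80
        ip-protocol tcp
        pool /Common/pool_app_http
        profiles {
            /Common/http { }
            /Common/tcp { }
        }
        # SNAT automap: client source IP becomes a BIG-IP self IP
        source-address-translation {
            type automap
        }
    }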

 

 

The issues mentioned above during the initial config and setup could have been related to the feature-release code we were running in the 9.2 train.

 

 

All thoughts, suggestions, and words of wisdom are welcome.

 

 

Thanks

 

-L

30 Replies

  • Robin_Mordasie1
    Historic F5 Account
When the F5 is configured as the default gateway for backend nodes, there are advantages as well as disadvantages. When you deploy an F5 unit as a router or gateway for pool members, the members see the real client IP address. One problem organizations face with deploying in routed mode is that management traffic for the nodes also traverses the F5. That management traffic can represent significant additional bandwidth, limiting the capacity of the F5. There are two solutions: deploy F5 devices large enough to handle the additional traffic, or deploy a dedicated network for management traffic.

     

     

    Generally, organizations with mature networks tend to deploy F5 units as routers or gateways for members. However, if management traffic represents a significant amount of bandwidth, and deploying a dedicated management network or higher-throughput F5 units is not an option, then a one-armed configuration can be deployed.

     

     

    Deploying an F5 in a one-armed configuration also has its own set of advantages and disadvantages. Since members never see the real client IP address, locally configuring security rules based on IP addresses is not possible, and application logs never show the real client IP. There are mechanisms to compensate for this, such as inserting an X-Forwarded-For header, but these are only possible with HTTP and SMTP traffic. One of the major concerns for organizations that deploy a one-armed, SNAT'ed configuration is that network troubleshooting becomes more difficult.
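
    As a concrete illustration of that compensating mechanism, the header insertion is a single setting on the HTTP profile. A minimal sketch in modern tmsh syntax (the profile name is hypothetical):

        ltm profile http /Common/http_xff {
            defaults-from /Common/http
            # Insert the original client IP into each request so
            # application logs can still record it despite SNAT
            insert-xforwarded-for enabled
        }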

     

     

    The decision to configure the F5 as a router or as a one-armed device is not a global setting: organizations can configure a hybrid of SNAT'ed and non-SNAT'ed VIPs on the same F5 unit. Organizations that are new to load balancing generally deploy the F5 in a one-armed configuration, since no changes need to be made to the infrastructure to fit the F5 into the network. Until the organization is comfortable with the concepts of load balancing, it is comforting to know they can easily "rip it out" and revert to a non-load-balanced environment.
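
    To illustrate the per-VIP nature of the choice, here is a hedged sketch of a SNAT'ed and a non-SNAT'ed virtual coexisting on one unit (all names and addresses are hypothetical):

        # One-arm style: pool members see a self IP as the source
        ltm virtual /Common/vs_snat {
            destination /Common/10.10.10.101:443
            pool /Common/pool_a
            source-address-translation {
                type automap
            }
        }
        # Routed style: pool members see the real client IP, so they
        # must route return traffic back through the BIG-IP
        ltm virtual /Common/vs_nosnat {
            destination /Common/10.10.10.102:443
            pool /Common/pool_b
            source-address-translation {
                type none
            }
        }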

     

     

    As organizations mature with the concept of load balancing, they can phase in non-SNAT'ed VIPs alongside the SNAT'ed VIPs as they become more willing to make the F5 the default gateway for nodes on their network and deal with the management traffic.
  • The DMZ is the ONLY environment in which I have servers in my LTM VLANs; everything else is routed to my LTMs from different distribution blocks in the datacenters. We have had great success with this deployment. I try to match the fourth octet of the VIP and its SNAT address so that we can track applications (if not clients) through the load balancers when the traffic is not easily manipulated the way HTTP is.

     

     

    The only ghosts I've experienced with my LTMs have been with that which shall not be named... (OK, I'll name it: VLAN groups.)
  • Thanks for all the replies. The issues I mentioned in my initial post were most likely due to lack of experience and to not using a default forwarding virtual server for all outbound traffic from the server VLAN. All of the nodes we use are Windows hosts, and the default-deny characteristic of the BIG-IP was causing major issues.
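
    For anyone who hits the same default-deny wall, a forwarding virtual server of the kind I was missing looks roughly like this in modern tmsh syntax (the VLAN name is hypothetical):

        ltm virtual /Common/vs_outbound_forward {
            # Wildcard listener: any destination address, any port
            destination /Common/0.0.0.0:any
            # Forward packets rather than load balance; no pool,
            # and no address or port translation
            ip-forward
            translate-address disabled
            translate-port disabled
            profiles {
                /Common/fastL4 { }
            }
            # Listen only on the server-facing VLAN
            vlans {
                /Common/vlan_servers
            }
            vlans-enabled
        }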

     

     

    I will most likely set up an in-line lab to see what happens, but wanted to just pose the question to get some feedback from the "folks in the field".

     

     

    Thanks again, and keep 'em coming.

     

     

    -L
  • My initial goal was to keep all the server chatter isolated in its own broadcast domain and route to the ADC only when necessary. In our environment, the ideal location was a series of netblocks (different ADCs for different business functions) off the core, since all distribution blocks must route to and through the core for client and server access. In one of the datacenters there is only one distribution block, so the ADCs are homed there instead of at the core to eliminate unnecessary hops.

     

     

    Fortunately for me, I was hired during the vendor-selection process for the old gear's replacement, so the design was freshly implemented after the ADC was selected.
  • One-armed setups can also be less flexible with regard to routing unsolicited traffic to different gateways as it leaves the box, e.g. when doing reverse proxy.

     

  • I personally like the hybrid approach when applicable... Why only pick one if you don't have to?

     

     

    This way you have the flexibility of being able to load balance anything you can route to from the LTM, but if needed you can offer up a solution for the one-armed approach's limitations.

     

     

     

  • Hamish
    Cirrocumulus
    If you design for in-line, you can always do SNAT/OneArm...

     

     

    Remember, in-line is far easier to debug (provided the server-side connection keeps the client's source IP rather than being SNAT'ed), because you can tcpdump at the pool member and its logs will be correct. (Yes, you can insert X-Forwarded-For, but not all protocols are HTTP.)
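
    When SNAT is unavoidable, one partial workaround for non-HTTP protocols (a sketch only, and no substitute for a clean tcpdump) is an iRule that logs the real client address as each connection is accepted:

        ltm rule /Common/log_real_client {
            when CLIENT_ACCEPTED {
                # Record the pre-SNAT client address so server-side
                # connections can be correlated with real clients
                log local0. "client [IP::client_addr]:[TCP::client_port] -> [IP::local_addr]:[TCP::local_port]"
            }
        }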

     

     

    H
  • One more thing that nobody has stated here: remember that F5s do not provide full-state failover by default. In my seven years of experience I've seen situations and environments where this setup was a disaster, since applications were not able to reconnect sessions after a failover of the F5 cluster. For me, inline is the last resort (I would run the cluster in hybrid mode, so we use inline only for particular VIPs when necessary).
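
    For what it's worth, connection mirroring can be enabled per virtual server to soften (though not eliminate) this problem. A hedged tmsh sketch with hypothetical names:

        ltm virtual /Common/vs_mirrored {
            destination /Common/10.10.10.103:443
            pool /Common/pool_c
            # Replicate connection state to the standby unit so
            # established sessions can survive a failover, at some
            # cost in performance
            mirror enabled
        }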

     

    • ltmbanter_43291
      Nimbostratus
      Cluster in hybrid mode, never considered that. Thanks! When we upgrade our 6400s, I'll look into it.
    • Robin_Mordasie1
      Historic F5 Account
      Really, there is no distinction between inline and one-armed in how traffic is processed: in both cases the F5 is a full proxy, so whether the egress is on the same VLAN as the ingress or on a different VLAN, the traffic is handled the same way. The question really comes down to whether or not we need to SNAT the traffic. If the F5 is not the default gateway for the application servers, we need to SNAT; if it is, we do not.
  • I do not approve of ingress SNAT or SNAT pools in any circumstance :p

     

    True L3 IP visibility at the lower layers is the cornerstone of smooth troubleshooting. These days such networks are a minority, but I always advocate the use of two default gateways (via IP rules) on end-servers if the F5 cannot be the only default gateway.

     

    BIG-IP with explicit use of SNAT (one-arm/one-VLAN deployment) may work, but there are CAUTIONS:

     

    • Loss of the ability to run tcpdump against the true client src IP on end-servers, and on any other device in line after the BIG-IP. This alone, without considering any other facts or variables, makes the deployment unclean/dirty.
    • Risk of breaching TCP src-port limits on the server side. You can have ~64k concurrent server-side connections from your SNAT IP to a pool member (dst-IP/port combo); it becomes far easier to breach that limit when more clients are stacked up behind the same src IP.
    • Once the limit above is breached, you are likely to opt for "SNAT pools" (a minimal sketch follows this list), and this will convert your infrastructure into a clusterfuck.
    • Now, as a dedicated administrator of a clusterfuck infrastructure, what kind of evidence can you provide to an external party to convincingly prove that an incident is not linked to a "possible network issue on your side"? What will you say if they ask for a tcpdump against their source IP address from the end-servers?
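
    For completeness, the "SNAT pools" escape hatch mentioned in the list above looks roughly like this (names and addresses are hypothetical); each member address buys roughly another 64k ephemeral ports per pool member:

        ltm snatpool /Common/snat_pool_app {
            members {
                /Common/10.10.10.50
                /Common/10.10.10.51
            }
        }
        ltm virtual /Common/vs_app {
            destination /Common/10.10.10.100:80
            pool /Common/pool_app
            # Draw translation addresses from the pool rather
            # than from a single self IP
            source-address-translation {
                type snat
                pool /Common/snat_pool_app
            }
        }
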
    • JRahm
      Admin

      I try to be less dogmatic in my advice. As much as we would all love the ideal greenfield deployment, the reality is far from that, so knowing all the options and how to best deal with them is important.

       

    • Harry1
      Nimbostratus

      So can we convert the one-arm deployment to a two-arm mode just by pointing the app servers' gateway to the BIG-IP, or do I need two interfaces, like inside and outside?

       

    • Hannes_Rapp_162
      Nacreous

      Hello prak,

       

      The basic prerequisite for an in-line, SNATless BIG-IP deployment is that client-side and server-side traffic do not use identical VLAN tag information. If you already have servers in a given VLAN, it's best to take that existing VLAN number and configure it on the BIG-IP for use on the server side (internal). For the client-side (external) traffic you should allocate a different VLAN.
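
      A minimal sketch of that two-VLAN layout in tmsh syntax (tags, interfaces, and addresses are all hypothetical):

          # Client-facing and server-facing VLANs on separate tags
          net vlan /Common/external {
              interfaces {
                  1.1 { }
              }
              tag 110
          }
          net vlan /Common/internal {
              interfaces {
                  1.2 { }
              }
              tag 120
          }
          # A self IP on each VLAN gives the BIG-IP a presence
          # on both sides
          net self /Common/self_external {
              address 203.0.113.2/24
              vlan /Common/external
          }
          net self /Common/self_internal {
              address 10.20.0.2/24
              vlan /Common/internal
          }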

       

      If you decide to go ahead with the design changes and need more help, I would gladly help you out if you post a separate question.

       
