Forum Discussion

Derek_Murphy_38
Mar 28, 2011

ARX configuration/network layout

Hi guys,

 

We have a pair of ARX2000s that I'm getting ready to set up, and I want to make sure I configure them properly. Here is how we're planning to configure the ARXs:

 

 

5 ports set up in a single-arm LACP aggregation to tolerate a single link loss (4 ports carrying traffic and one as a spare). It will reside in our VLAN 32 and use VLAN tagging; all client networks are routable to that network. We'll have another 5 ports set up in a similar aggregation cabled into our redundant core switch, with the expectation that it would only ever be used if we lost the primary core switch. I'm not entirely sure how to configure the ARX so that traffic doesn't get sent over the second set of 5 interfaces - our network engineers say we can use spanning tree for this. The last 2 ports will be set up for in-band management (are these what will handle heartbeat traffic as well?).

 

 

Does this sound doable? I'm coming at this from a Unix/storage background and am not as confident on the network layout side.

 

 

Cheers,

 

-Derek

 

10 Replies

  • Hello Derek....

     

     

    I don't see anything wrong with your idea. I can share how we are set up, per the advice of the F5 consultant. We have twin 6509 core switches configured in VSS, so both switches are logically one. Each of our ARXs has an 8-member LACP arm to the core: four links to the primary switch and four links to the secondary switch. All ARX traffic is sent through this single interface "arm". For heartbeat traffic we set up two links interconnecting the two ARXs (a direct ARX-to-ARX connection). In-band management is handled over the 8-member LACP team back to the core switches. Also worth noting: we do not trunk these ports; they are access interfaces.
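
    For what it's worth, a rough sketch of what our single 8-member arm looks like on the ARX side (the interface numbers and description are placeholders for illustration; since we run the ports as untagged access interfaces, there are no vlan-tag statements):

    channel 1
      members 1/1 to 1/8
      lacp passive
      description "8-member arm to VSS core"
      no shutdown
      exit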

     

     

    Hope this helps....

     

     

    Harold
  • Jim_McCarron_44
    Historic F5 Account
    Derek,

     

     

    Typically we try to avoid anything using Spanning Tree due to the outages it may cause during reconfiguration. The recommended configuration we typically use is a single channel from each ARX to a single core switch. You could also have multiple VLANs if required. The only downside to this approach is that if you lose the active core switch, it will also cause the ARX to fail over to the standby. The ARX does not support any sort of resilient link technology today, so you could either go the STP route if you absolutely need links to both core switches, or, if you are running a Cisco L2 core that supports MCEC (Multi-Chassis EtherChannel) / VSS (what Harold is running in the comment above), you can take a single ARX channel and split it across 2 Cisco switches (1 virtual switch).

     

     

    Jim
  • Thanks for the responses Harold and Jim. I'm at the point where I'm actually setting up the boxes now, and my plan has changed slightly after reading some of the docs.

     

     

    I am figuring on 4 ports lacp going to core switch 1, 4 ports lacp going to core switch 2, 2 ports - heartbeat, 2 ports in-band management. I still want to go the route of having both ARX's connected to both core switches because I don't want to have to fail over the ARX if I lose a core switch.

     

     

    One thing I am curious about is how the proxy-ip addresses work with regards to actual data transmission.

     

     

    The VIP that is going to be bound to the namespace is going to be on our vlan 32 - /22 net (10.10.32.something). Our netapp filers are all on vlan 114 - /23 net - (10.10.114.something). My thinking is that the proxy-IP's will be on vlan 114, and the VIP will be vlan 32. Will that allow the arx to access the filers and retrieve content? The netapps do not have interfaces on vlan 32 - only 114, and at some point we're going to make 114 a non-routable network so I want to make sure that the ARX will be able to communicate to the filers using the 114 addresses.

     

     

    Is this how it works?
  • Jim_McCarron_44
    Historic F5 Account
    It's not so easy to follow without a diagram. A few questions....

     

     

    Which VLAN will each channel be in? Is channel 1 going to be in VLAN 32 and channel 2 in VLAN 114? If so, then this is what we call a dual VLAN setup, with proxy IP addresses in one subnet and VIPs in another.

     

    You need to ensure that the proxy IP addresses are routable, as they will be used to talk to any authentication services (domain controllers for CIFS, and NIS servers for NFS). Management IP addresses are used for things like gateway monitoring and communication to the Quorum disk (for redundancy)... you don't need separate VLANs for in-band management. If you make the 114 net non-routable, then you could end up breaking authentication. You'll need to at least allow routing for the proxy IP addresses on the ARX to reach outside resources, and external authentication resources will need to be able to communicate back to the proxy IP addresses.

     

     

    I don't understand the comment "2 ports in-band management" - your channels should be your "in-band" ports, so Management IP addresses should be assigned to each VLAN, in addition to your MIP's and VIP's.
  • Agreed re: the diagram - still working on that.

     

     

    My questions are also stemming from having taken the admin class about 4 months ago and trying to remember everything we covered - so certain things, like the in-band management, I've simply forgotten how they work. Let me try to clarify.

     

     

    My original idea was the following (which changes now, with in-band management being carried in the channels):

     

    arx1: gbe 1/1 - gbe 1/4 - cabled into core switch 1
    arx1: gbe 1/5 - gbe 1/8 - cabled into core switch 2
    arx1: gbe 1/9 - gbe 1/10 - heartbeat
    arx1: gbe 1/11 - gbe 1/12 - in-band management

    arx2: gbe 1/1 - gbe 1/4 - cabled into core switch 1
    arx2: gbe 1/5 - gbe 1/8 - cabled into core switch 2
    arx2: gbe 1/9 - gbe 1/10 - heartbeat
    arx2: gbe 1/11 - gbe 1/12 - in-band management

    Now it seems like it would look more like:

    arx1: gbe 1/1 - gbe 1/5 - cabled into core switch 1
    arx1: gbe 1/6 - gbe 1/10 - cabled into core switch 2
    arx1: gbe 1/11 - gbe 1/12 - heartbeat

    arx2: gbe 1/1 - gbe 1/5 - cabled into core switch 1
    arx2: gbe 1/6 - gbe 1/10 - cabled into core switch 2
    arx2: gbe 1/11 - gbe 1/12 - heartbeat

     

     

    with ports 1-10 on each switch tagged with VLANs 32 and 114.

     

     

    From the perspective of a client getting at data, we'll have the following:

     

     

    clients = many vlans -> files.domain.com vlan 32 -> netapps vlan 114

     

     

    Clients are all over the world, in different offices, on many different VLANs. They are going to access all files via files.domain.com - 10.10.32.50 - VLAN 32, for example.

     

     

    The arx will be serving data from the following:

     

     

    netapp1 - vlan 114 - 10.10.114.20
    netapp2 - vlan 114 - 10.10.114.21

     

     

    Clients should not be allowed to go to the NetApps directly for access. We will configure the shares to only allow ARX access, but from a routing standpoint, the architects in the group want to make the storage network non-routable at some point in the future, allowing only machines that have an IP on the 10.10.114 network to send and receive traffic to/from the NetApps.

     

     

    My hope is to have the ARX access content on the NetApps via the 10.10.114 VLAN while serving content to clients on the 10.10.32 VLAN (all over the same channel of ports on the ARX). From the sound of it, this might not be possible - unless it can work via static routes to domain controllers (CIFS), LDAP servers (NFS - no NIS, only LDAP for Unix here), and NTP servers?

     

     

    Forgive me if I'm overlooking anything. My previous experience was with a lab environment on an arx 500 so it was a much simpler setup :)

     

     

     

    My gut feeling is that the in-band management and the proxy IPs are pretty closely related, so if anything it would be proxy IP / in-band management on VLAN 114 and VIP on VLAN 32 (but all ports would need to carry both VLANs)?

     

    Regarding any static routes: VLAN 32 is our server network, so any machine that might be a dependency of the ARX (auth, etc.) would be in that network. We also have a management network (VLAN 44) that we would want the ARX's 114 interfaces to be able to route to in case we shut down services on the 32 network (domain controllers down for maintenance, let's say).
  • Jim_McCarron_44
    Historic F5 Account
    Derek,

     

     

    What model of Cisco switch? Does it support MCEC/VSS? If so, then you can treat the channel as virtual and split a single ARX channel across multiple core Cisco switches. If the Cisco switch does not support MCEC/VSS, then dual-homing channels is not a mode we typically test or deploy with. My only concern would be the potential for MAC address thrashing if addresses were learned via multiple channels. I do not know if this would occur, but because this is a mode we don't test, I can't promise what the behavior would be.

     

     

    From the VLAN/routing perspective, here is some detail that I hope helps you out (I can't paste in the diagrams), but it should explain the issues when deploying in dual VLAN mode.

     

     

    Each customer must decide which ARX deployment mode is best for their environment. In dual VLAN mode the default gateway points to the gateway on the client network where the Virtual IP addresses are configured. If storage is on the same subnet as the server-facing VLAN where the proxy IP addresses reside, then no additional routing needs to be configured, except for authentication. If, however, storage is on a different subnet than the ARX proxy IP addresses, then static routes to the filer subnet, or individual host routes to each filer, must be configured using the gateway on the server-facing network. Individual host-based routes are only required if there will be clients accessing ARX Virtual Servers from the same subnet as the storage virtualized by the ARX.

     

     

    The diagram (which I can't paste here) depicts a typical dual-arm deployment. A "client" VLAN/subnet is configured on the 10.1.1.x network. An in-band management IP address (10.1.1.100) and a Virtual IP address (10.1.1.101) are configured on this subnet. Clients access the VIP from many external subnets, so the ARX is configured with two default routes which differ only by gateway and cost. The default route to gateway 10.1.1.254 is preferred because it has a lower cost (10); the default route using gateway 10.1.1.253 is a backup route because of its higher cost (100) and will only become active if 10.1.1.254 becomes unavailable. Default routes should always be used on the client network in a dual VLAN setup because there tend to be more client subnets than storage subnets. Dual VLAN setups will require more manual routes than a one-armed configuration.

     

     

    The storage being virtualized by the ARX in that diagram is also on a remote network, and requires that the ARX have a route to that destination. A default route cannot be configured because there is already a default route in use on the client-facing network. The proper configuration in this case is to point the ARX to the gateway (10.5.5.254) on the server-side VLAN/subnet using a static route. The route should be configured either for the subnet where the storage resides (a network-based route to 10.6.6.0/255.255.255.0), if there are no clients that will access the ARX from this subnet, or to the storage itself (host-based routes 10.6.6.1/255.255.255.255 and 10.6.6.2/255.255.255.255) if there are also clients that reside on the same subnet as the storage. In the latter case a host route would need to be added for each storage device under ARX management.
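
    To make that concrete, the routes described above would look roughly like the lines below on the ARX (same example addresses as above; treat this as a sketch, not a tested config):

    ; network-based route to the storage subnet, via the server-side gateway
    ip route 10.6.6.0 255.255.255.0 10.5.5.254

    ; or host-based routes to each filer, if clients also live on the 10.6.6.x subnet
    ip route 10.6.6.1 255.255.255.255 10.5.5.254
    ip route 10.6.6.2 255.255.255.255 10.5.5.254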

     

     

    The routing is set up this way in a dual-arm deployment to ensure that responses to clients are sent back using the client gateway, and traffic destined for storage is sent via the server network gateway. If there are any stateful firewall devices in the network this is critical, so that conversations are not dropped for being asymmetric (responses sent over a different path than the request). A firewall will not allow this to occur; the request and response need to traverse the same path.

     

    Another consideration for dual-arm deployments is that the server subnet must be routable, and the ARX proxy IP addresses must be able to reach external subnets. When the ARX needs to communicate with Active Directory or NIS authentication services, it initiates the transaction using one of its proxy IP addresses, which reside on the server network. If that network does not have a gateway, or if it is configured as a private non-routable subnet in the customer environment, then the ARX will be unable to authenticate clients because it cannot contact the authentication services.

  • Hi Jim,

     

    That helps a bit.

     

     

    I'm also having some issues getting my gateways set properly.

     

     

    Our network engineers have so far set up the ports on our 6509 as single trunked interfaces carrying 3 VLANs - 32/112/114. VLAN 32 is client/server, and VLANs 112/114 are for storage (NetApps); I'm not actually using 112 on the ARX yet. The 6509 ports will be set up in an LACP channel soon.

     

     

    On the ARX side, I set up my ports in a lacp channel. Config is below. I'm not doing something right. The behavior I'm seeing is:

     

    I can't add a default gateway for vlan 32. Running ip route 0.0.0.0 0.0.0.0 10.17.32.1 doesn't do anything (show ip route does not change). I assume this is because my proxyIP's are on vlan 114.

     

     

    I can add a default gateway for VLAN 114. Running "show ip route" and "show ip route monitor" shows it added, but pinging another IP on that same network fails, and then the route is removed. I don't know if part of my problem is my channel config at the bottom, with the Cisco side not being channeled yet.

     

     

    Any thoughts as to where my breakdown might be? 114 is a routable network.

     

     

    SUMC01ARX01(cfg) ip route 0.0.0.0 0.0.0.0 10.17.114.1
    SUMC01ARX01(cfg) show ip route

    Destination/Mask   Gateway       Cost   Interface   Age
    ---------------------------------------------------------------------
    0.0.0.0/0          10.16.30.1    128    Mgmt        822250
    0.0.0.0/0          10.17.114.1   128    VLAN114     2
    10.16.30.0/24      0.0.0.0       0      Mgmt        Direct
    10.17.32.0/22      0.0.0.0       0      VLAN32      Direct
    10.17.32.0/22      0.0.0.0       128    VLAN        Direct
    10.17.114.0/23     0.0.0.0       0      VLAN114     Direct
    10.17.114.0/23     0.0.0.0       128    VLAN        Direct

    SUMC01ARX01(cfg) show ip route monitor

    Destination/Mask   Type   Gateway       Cost   Status   Details
    ------------------------------------------------------------------------------
    0.0.0.0/0          Mgmt   10.16.30.1    128    Up       Current Gateway
    0.0.0.0/0          VLAN   10.17.114.1   128    Up       Current Gateway
                              10.17.32.1    128    Down     Unreachable

    SUMC01ARX01(cfg) ping 10.17.114.23
    PING 10.17.114.23 (10.17.114.23) 0 data bytes
    Ping timeout 10.17.114.23 from 1.5
    Ping timeout 10.17.114.23 from 1.5
    Ping timeout 10.17.114.23 from 1.5
    Ping timeout 10.17.114.23 from 1.5

    ------- 10.17.114.23 ping statistics
    4 packets transmitted, 0 packets received, 100% packet loss
    round-trip min/avg/max 0/0/0 ms

    SUMC01ARX01(cfg) show ip route monitor

    Destination/Mask   Type   Gateway       Cost   Status   Details
    ------------------------------------------------------------------------------
    0.0.0.0/0          Mgmt   10.16.30.1    128    Up       Current Gateway
    0.0.0.0/0          VLAN   10.17.114.1   128    Down     No Reply
                              10.17.32.1    128    Down     Unreachable

    SUMC01ARX01(cfg) show ip route

    Destination/Mask   Gateway       Cost   Interface   Age
    ---------------------------------------------------------------------
    0.0.0.0/0          10.16.30.1    128    Mgmt        822305
    10.16.30.0/24      0.0.0.0       0      Mgmt        Direct
    10.17.32.0/22      0.0.0.0       0      VLAN32      Direct
    10.17.32.0/22      0.0.0.0       128    VLAN        Direct
    10.17.114.0/23     0.0.0.0       0      VLAN114     Direct
    10.17.114.0/23     0.0.0.0       128    VLAN        Direct

     

     

     

     

    =======configuration=======

    hostname SUMC01ARX01

    vlan 32
      description "Vlan 32 - Client"
      exit
    vlan 114
      description "Vlan 114 - Storage"
      exit

    ip proxy-address 10.17.114.161 255.255.254.0 vlan 114
    ip proxy-address 10.17.114.162 255.255.254.0 vlan 114
    ip proxy-address 10.17.114.163 255.255.254.0 vlan 114
    ip proxy-address 10.17.114.164 255.255.254.0 vlan 114

    show ip proxy-address

    config
    vlan 114
      tag 1/1 to 1/10
      exit
    vlan 32
      tag 1/1 to 1/10
      exit

    interface gigabit 1/1
      description "Primary Client/Server Link"
      speed 1000-full
      no shut
      exit
    interface gigabit 1/2
      description "Primary Client/Server Link"
      speed 1000-full
      no shut
      exit
    interface gigabit 1/3
      description "Primary Client/Server Link"
      speed 1000-full
      no shut
      exit
    interface gigabit 1/4
      description "Primary Client/Server Link"
      speed 1000-full
      no shut
      exit
    interface gigabit 1/5
      description "Primary Client/Server Link"
      speed 1000-full
      no shut
      exit
    interface gigabit 1/6
      description "Primary Client/Server Link"
      speed 1000-full
      no shut
      exit
    interface gigabit 1/7
      description "Primary Client/Server Link"
      speed 1000-full
      no shut
      exit
    interface gigabit 1/8
      description "Primary Client/Server Link"
      speed 1000-full
      no shut
      exit
    interface gigabit 1/9
      description "Primary Client/Server Link"
      speed 1000-full
      no shut
      exit
    interface gigabit 1/10
      description "Primary Client/Server Link"
      speed 1000-full
      no shut
      exit

    int gig 1/11
      redundancy protocol
      no shutdown
      exit
    int gig 1/12
      redundancy protocol
      no shutdown
      exit

    ip route 0.0.0.0 0.0.0.0 10.17.32.1

    int vlan 32
      description "In band management IP - 10.17.32.165"
      ip address 10.17.32.165 255.255.252.0
      no shut
      exit
    int vlan 114
      description "In band management IP - 10.17.114.165"
      ip address 10.17.114.165 255.255.254.0
      no shut
      exit

    ; vlan 32 and 114
    channel 1
      redundancy protocol 1/1 to 1/8
      vlan-tag 32
      vlan-tag 114
      lacp passive
      description "1-8 lacp"
      no trap shutdown
      exit

     

     

     

  • Jim_McCarron_44
    Historic F5 Account
    Derek,

     

     

    I believe your issue (at least one of them) is with the channel configuration.

     

     

    ; vlan 32 and 114
    channel 1
      redundancy protocol 1/1 to 1/8
      vlan-tag 32
      vlan-tag 114
      lacp passive
      description "1-8 lacp"
      no trap shutdown
      exit

     

     

    The keywords "redundancy protocol" in the channel are incorrect. If you want "channel 1" to consist of ports 1/1 to 1/8 for client/server traffic, then you should use the keyword "members" instead of "redundancy protocol", like this:

     

     

    ; vlan 32 and 114
    channel 1
      members 1/1 to 1/8
      vlan-tag 32
      vlan-tag 114
      lacp passive
      description "1-8 lacp"
      no shutdown
      exit

     

     

    Your second problem looks to be with the ports (1/11 and 1/12) that will be used for the cluster inter-connect between the two ARX's. (Current config below)

     

     

    int gig 1/11
      redundancy protocol
      no shutdown
      exit
    int gig 1/12
      redundancy protocol
      no shutdown
      exit

     

     

    I assume you want these two ports configured as a channel between the two ARXs, and you want to run the redundancy protocol over them? If this is the case you must remove "redundancy protocol" from both gig ports, and you must also hard-code the speed/duplex in order to put these ports into a channel. Like this:

     

     

    int gig 1/11
      speed 1000-full
      no shutdown
      exit
    int gig 1/12
      speed 1000-full
      no shutdown
      exit

     

     

    Then you should create a second channel, but this time use "redundancy protocol" to specify the ports rather than "members", like this:

     

     

    channel 2
      redundancy protocol 1/11 to 1/12
      lacp passive
      description "chassis interconnect"
      no shutdown
      exit

     

     

    Give that a shot. As for routing and gateways: you may only have one active default route at a time (the ARX has a single route table and does not support Equal Cost Multipath routing). It is best to put the default gateway on the client network (because there are typically more client networks than storage networks). If there is remote storage that the ARX will virtualize (i.e., not on the same subnet as the ARX proxy IP addresses), then you must configure a static host route to the storage using a gateway on the subnet where the proxy IP addresses live, instead of a default route.
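
    In your addressing that would look something like the lines below (the 10.18.0.20 filer address is purely hypothetical, just to illustrate the host-route case; since your NetApps sit on the same subnet as your proxy IPs, today you would only need the default route):

    ; single default route via the client-side (VLAN 32) gateway
    ip route 0.0.0.0 0.0.0.0 10.17.32.1

    ; host route to a remote filer, via the gateway on the proxy IP (VLAN 114) subnet
    ip route 10.18.0.20 255.255.255.255 10.17.114.1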
  • Jim_McCarron_44
    Historic F5 Account
    Derek,

     

     

    Re: your question below:

     

    Do I need to have 2 in-band management IP's if the only vlan 32 address is going to be the VIPs?

     

     

    Yes. I would highly recommend having in-band Management IPs on any VLAN the ARXs connect to. There are some functions, such as critical route monitoring, which need a Management IP address to probe from. You'll want to set up critical routes on both of your VLANs when you set up redundancy.

     

     

    I would recommend holding off on any global-config until you get the units paired up. It looks like you have not yet enabled redundancy and configured the quorum disk. You'll need to enable redundancy mode, add the peer address, and add the quorum disk location. You also need to add the keyword "redundancy" to one of your VLAN interfaces; I recommend adding it to the proxy IP VLAN, as I have done below for one of your ARXs:

     

     

    On ARX1:

     

     

    interface vlan 114
      ip address 10.17.114.165 255.255.254.0
      redundancy
      no shutdown
      exit

     

     

    On ARX2:

     

     

    interface vlan 114
      ip address 10.17.114.175 255.255.254.0
      redundancy
      no shutdown
      exit

     

     

    Next you need to enable redundancy and configure the required parameters (something like the below). You'll need to provide a proper quorum disk location (the example shows an NFS location with IP address 1.1.1.1):

     

     

    on ARX1:

     

     

    redundancy
      peer 10.17.114.175
      quorum-disk 1.1.1.1:/vol/vol1/quorum nfs3tcp
      enable
      exit

     

     

    on ARX2:

     

     

    redundancy
      peer 10.17.114.165
      quorum-disk 1.1.1.1:/vol/vol1/quorum nfs3tcp
      enable
      exit

     

     

    As for this question:

     

    With regards to static routes, you mention that if storage doesn't live on the same network as the proxy IP's I need a static host route. Will this also work for other servers (AD, NTP etc..)?

     

     

    Yes. You can use subnet-based routes using a gateway that lives on the proxy IP subnet. But if you also have clients that will access an ARX VIP and that live on the same subnet as those devices (AD, NIS, etc.), then you will want to enter static host routes to any authentication services (NIS/AD). NTP, SNMP, etc. use Management IPs, not proxy IP addresses.
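
    As a rough illustration only (the 10.20.0.x domain controller addresses below are made up), the difference looks like this:

    ; subnet-based route covering the AD/NIS servers, via the gateway on the proxy IP subnet
    ip route 10.20.0.0 255.255.255.0 10.17.114.1

    ; host-based routes instead, if clients on 10.20.0.x will also access an ARX VIP
    ip route 10.20.0.11 255.255.255.255 10.17.114.1
    ip route 10.20.0.12 255.255.255.255 10.17.114.1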

     

     

    The UNIX/AD mapping is probably beyond what we can cover here, and you'd probably want an F5 SE to look into the environment more deeply to cover all the bases, but at a high level the mapping is done behind the ARX (usermap.cfg on NetApp, as an example). ARX does not talk to LDAP, only AD, but if the filer is doing LDAP for file system security (NFS), then it should be transparent to the ARX. The ARX does not alter the UID/GID from the user into the file system. If you use LDAP for NFS mount-based security, the ARX does not currently support that function.

     

     

    The ARX and upstream L2 switch should use all ports in the channel. As with any L2/L3 device, the traffic distribution will typically be some sort of hash on the IP addresses.

     

     

    re: the question:

     

    If I have arx1 1/1 to 1/4 going to switch 1 and arx1 1/5 to 1/8 going to switch 2, if they are all part of the same channel the ARX has no way to determine which switch/interfaces it should be sending packets to if the 2 back-end switches are not configured as a single logical switch? Correct?

     

     

    Yes this is correct.

     

     

    If I had the same setup above, but I had 2 channels, would I be able to achieve a multi homed setup if I was using 2 channels?

     

     

    It may work, but it is not a deployment mode we test. My concern is having 2 different channels in the same subnet/VLAN. This could potentially cause MAC address flapping if MAC addresses appear to move from one port to another; upstream L2 switches are not happy when this occurs. Like I said, this may work, but without formally qualifying this deployment mode I can't comment on what the exact behavior will be. I can only point out what I see as potential gotchas.

     

     

    Hope this helps.

    Jim

  • Hi Jim,

     

    Great suggestions and I really appreciate your assistance.

     

     

    I have redundancy set up and working, which is fantastic. A strange thing happened when I was testing failover, though. I rebooted arx01 and all of the IPs failed over to arx02; once arx01 came back up, the in-band management IPs were restored. I did the same to arx02 and all the IPs flipped over, but after the first reboot the in-band management IPs never came back. I restarted arx02 again and finally got the in-band management IPs back. I thought this was kind of strange. In this type of situation, what logs would be good to look at to see what may have happened?

     

     

    I'm going to look at installing/configuring the secure agent/cifs authentication next.

     

     

    It sounds like auth should be OK for me. I think we're just using UID/GID permissions. The only question I have: you mention that "if you use LDAP for NFS mount based security then ARX doesn't support that". Does that mean using LDAP to prevent an actual mount request (NFSv4 maybe?) vs. using a subnet ACL? If so, cool, because we aren't using that. What we are using is LDAP groups for file system permissions.

     

     

    We're also going to change our cabling to have arx01 go to coresw1 and arx02 go to coresw2, as it doesn't seem like there's any supportable way to make this work without VSS. With the new design, if we have an 8-port channel, will 4 ports essentially just be for redundancy since the ARX2000 caps out at 4 Gb, or will all 8 be used, each sending around 500 Mb/sec? Will the ARX fail over at a certain percentage of port loss, at total port loss, or is it a configurable value?

     

     

    One of the behaviors I noticed when failing over between the ARXs is that the only IPs not to come back on the other node were the in-band management ones. The proxy IPs from arx01 came online on arx02 (giving it 8 proxy IPs, 4/4). Why is this? I expected to only see the VIP move. Is it due to re-establishing connections with the same IP, or something along those lines?

     

     

    Cheers,

     

    -Derek