Last week was golden for me. Three of my projects had major blocking issues, and all three were resolved over the course of the week. That makes this week writing time, since two of the three projects exist to support writing I want or need to do.

This is the start of that process. The first item to catch a break last week was my ARX configuration. When I left off, some of the storage could join the domain and some could not. I needed everything to play nice in the domain so that I could pull it all together under the ARX. On Wednesday evening, RDP to the ADS server just dropped. I walked over to the lab and checked from the console, and the server couldn’t reach the local network, let alone anywhere else. I rebooted, and things were better – it could get to some things but not everything. Finding this to be terribly odd behavior with no obvious symptoms like messed-up routes, I traced the Ethernet cable, and found that it was plugged in somewhere it shouldn’t have been. While this is clearly a leftover from a previous bit of testing Lori and I were doing, I’m a little confused how it ever communicated at all. And yet most of the machines I was using for testing were joined to the domain, so it certainly was communicating. Sometimes.

I moved the cable, and everything started playing much more nicely. In fact, that cleared up most of the remaining issues.

Since I had a lot of non-functional items left in the ARX configuration, I opted to wipe the user-level configuration on the ARX and start cleanly. It’s pretty easy to wipe an ARX config: you simply delete a startup file and reboot. So I did that, removed everything from the domain and rejoined it while the ARX was rebooting just to make certain all was communicating cleanly, and 20 minutes later the ARX was configured with both NAS devices behind it, exposing shares to the domain.


So I went through and snapped some screenshots for you. I kept saying I thought my problems were not the ARX, and the speed with which everything was added and working shows what I meant. Here come the screenshots, with everything basically configured. It is not set up to do anything fun yet… I understand you may have forgotten this by now, but the point of this blog series was to show you what that cool stuff is and how to do it. So the rest of this blog shows some screenshots and talks about the architecture, and the next blog will hop right in with configuring shadow-copy.

All of my configuration was done through the UI. While I did my troubleshooting at the command line, the UI gave me the opportunity to show you some pretty pictures, so I used it.

The first thing you do is configure a namespace – a container for most of the other items you need to create. It holds publicly advertised shares and back-end filer shares, defines how the ARX communicates with the filers and how end users are expected to communicate with it, and holds the location that all shares are to use for metadata storage. My namespace is ingeniously named “ARXStorage”. The interface for the namespace is CIFS – I turned off NFS completely in this namespace, because if it is included as a communications option, every NAS must have NFS access to every share. For simplicity, I disabled it. Some of our shares do support NFS, but we didn’t need NFS access through the ARX. And trust me, after the pain I went through with ADS, this device was going to use it.


The Namespace then contains “Managed Volumes” – exports from NAS devices that will (eventually) be presented by the ARX. I have two of them in this configuration – backup (which maps to one NAS device), and Dell (which maps to the other). Lori and I normally back up the primary to a new NAS immediately when we receive one, so that a single PDU failure doesn’t drop our storage environment cold. More on why I bothered to tell you that in a moment. First, the Managed Volumes in Namespace ARXStorage.
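To keep the pieces straight, here is a rough sketch of how the back-end objects described so far fit together, using the names from this walkthrough (the filer-share mappings are the ones covered below):

```
Namespace: ARXStorage (CIFS only)
├── Managed Volume /backup  →  filer share /backup1  (primary NAS)
└── Managed Volume /Dell    →  filer share /NASShare (Dell PowerVault)
```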


There you have it, not much to see. Currently neither is listed as a shadow-copy target, both are enabled and online.

These are the shares the ARX is going to manage for us. The /backup share is mapped to /backup1 and is actively used – hazards of a growing and changing network – but the name is referenced all over the place, so I’m unwilling to change it. The /Dell share is the default share on PowerVault servers – /NASShare.

If we take a look at the volume by drilling down into it, we can see…


As you can see, there is a lot going on here. The “files” line is way off base (there is over a terabyte of data on the disk), but I took this screenshot as soon as the volume was up, so import was likely still going on. Notice that Metadata Free Space and Free Space are the same – this volume is on a NAS that uses thin provisioning, so I would expect them to be very close.

At this point the back-end is set up. We have two shares imported from two filers, the ARX knows how to communicate with them, and it is doing so well enough to tell us how much space is used and free on the disks. Next we need to add the front-end, a way for users to access these shares through the ARX.

So first we create a Virtual. You don’t actually have to create the Virtual first; you can just start defining exports, and if there is no Virtual, the UI will ask you to create one (okay, require you to, not ask, but you know). I’m showing you the Virtual first to keep things logically consistent and understandable. No matter how you create one, you must have the Virtual “first” or there is no way to export shares.


You will need an IP address for each Virtual you create. This is the entry point where users will access your shares – in essence, it masquerades as a filer. Using my expansive wit, I chose to name this Virtual “ARXStore”. Note that it is already joined to the domain and is up and running; this screenshot was taken after everything was configured.

Finally, we are ready to set up the exports. These are publicly exposed shares and/or mount points. I’ve made mine all CIFS for the reasons noted above, and here they are…



Each export has a “Domain Name” (the Virtual’s name in the domain), the Namespace it belongs to (which determines which back-end shares it can see), a Volume, and a Virtual Volume Path. The Volume is the back-end share; the path is the path that will be presented to users.
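Putting the front end and back end together, the exports in this configuration resolve roughly like this (the client-facing paths are my reading of the screenshots, assuming each export presents its volume under the volume’s own name):

```
Client path          Namespace     Volume     Back-end filer share
\\ARXStore\backup    ARXStorage    /backup    /backup1  (one NAS)
\\ARXStore\Dell      ARXStorage    /Dell      /NASShare (the other NAS)
```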

Once those and the Virtual show online, you’re in!

Now if you go to any machine that is authenticated to the domain, and type in \\ARXStore in the Explorer (or equivalent) Address bar, you will see this:
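If you’d rather map a drive than browse, the standard Windows `net use` command works against the Virtual just like it would against a real filer (assuming the /backup export from this configuration):

```
net use Z: \\ARXStore\backup /persistent:yes
```

The client never knows the data actually lives on /backup1 behind the ARX.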


There you have it: the two shares exposed by the Virtual. They are accessible and can be mapped or mounted from any machine that can authenticate to our internal domain (which is purposefully few; we don’t like giving out actual information about our network, so it’s locked down).

Next we can start using these exports to do some interesting stuff. Remember I said that we normally block-copy the backup1 share (and a couple of others) to a new NAS? Well, next we’re going to try setting up shadow-copy on the ARX to see if it will just do that copying for us in the background. But that’s the next blog, not this one.

Until then,