The Problem

Application performance testing is a tricky beast.  In the good old days, you would write some code, plop it on a web server and expose it’s IP address to the world and you were ready to roll.  Testing application performance was relatively simple.  You would have your app team do timing on the various components of the application and monitor system resources and response times to see how the application would handle to the normal user and also under load.

The users were connecting to the application server and the application server was connecting to a data store of some sort. If a performance issue arose, it was hard to determine the root cause without internal knowledge of the application. A developer would have to come in and do some debugging to see whether the problem was at the application layer or behind the application in the data store.

To avoid fire drills, monitoring systems would be put into place to watch the application and alert the application owners so they could respond proactively to any issues that came up.

 

MONITOR --------|
                v
CLIENT -------> SERVER -> DATASTORE

 

Then along came the bots, hackers, and holiday shoppers who either caused mischief or overwhelmed capacity, and in came the need for load balancers, firewalls, and other security devices to help you control the traffic to your application.

The application owners would extend their monitoring system to check the external access point as well.

 

MONITOR --------|---------------------------------------------|
                v                                             v
CLIENT -------> WEB OPTIMIZER -> FIREWALL -> LOAD BALANCER -> SERVER -> DATASTORE

 

The added layers of protection and optimization make determining bottlenecks a bit tougher. So, what happens if the client reports an issue with the application and the developer can't find any problem in the server or database layer? Then it comes down to working with the network team to figure out where in the path between the client and the application server the issue lies.

For DevCentral, we tackled this problem by creating "insertion" points into our application path for our monitoring system to use, so that we would know immediately at which point in the path things went awry.

 

MONITOR --------|----------------|-----------|----------------|
                v                v           v                v
CLIENT -------> WEB OPTIMIZER -> FIREWALL -> LOAD BALANCER -> SERVER -> DATASTORE
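With a timing probe at each insertion point, localizing a slowdown becomes an exercise in comparing successive hops: each outer probe's response time includes everything behind it, so the difference between neighboring probes approximates the latency added by that layer. Here is a minimal Python sketch of that idea; the probe names and numbers are hypothetical, not our actual measurements:

```python
def attribute_latency(probes):
    """probes: list of (name, avg_response_seconds) ordered from the
    outermost probe point to the innermost. Each outer probe's time
    includes every layer behind it, so subtracting the next-inner
    probe's time isolates the latency added by that layer."""
    layers = []
    for i, (name, t) in enumerate(probes):
        inner_t = probes[i + 1][1] if i + 1 < len(probes) else 0.0
        layers.append((name, t - inner_t))
    # The layer adding the most latency is the prime suspect.
    return max(layers, key=lambda layer: layer[1])

# Hypothetical averages, outermost probe first:
suspect = attribute_latency([
    ("Web-Optimizer", 1.00),
    ("FireWall-LB", 0.90),
    ("App-01", 0.30),
])
print(suspect[0])  # prints "FireWall-LB" -- that hop added ~0.6s
```

In practice you would feed this the averaged timings your monitoring system already collects at each entry point.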

 

We use many tools for monitoring on DevCentral. Our primary tool is PRTG for internal server monitoring, but we also use a series of scripts for on-demand testing. I've put one of them in the Advanced Design and Config CodeShare under "Performance Testing Through the Network Layers". That script lets you configure your entry points, run a series of web requests, and perform some rudimentary timing tests. It's great for page load performance testing! Below is a sample usage:

PS > .\Time-EntryPoints.ps1 -url /test -groups all | select Host, Min, Max, Avg

Host               Min        Max         Avg
----               ---        ---         ---
Web-Optimizer      0.5723886   1.9792570  0.94863894
FireWall-LB        0.5883204   0.6944670  0.61742544
App-01             0.2155190   0.4100994  0.3327198
App-02             0.3457772   1.5822449  1.2143582
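If PowerShell isn't handy, the same kind of rudimentary timing test is easy to sketch in Python. This is only an illustration of the approach, not the CodeShare script itself, and the entry-point URLs in the commented usage are placeholders:

```python
import time
import urllib.request

def summarize(samples):
    """Reduce a list of per-request durations (seconds) to Min/Max/Avg."""
    return {
        "Min": min(samples),
        "Max": max(samples),
        "Avg": sum(samples) / len(samples),
    }

def time_entry_point(base_url, path="/test", runs=5, timeout=10):
    """Issue `runs` GET requests against one entry point and record the
    wall-clock duration of each full request/response."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
            resp.read()  # drain the body so the full transfer is timed
        samples.append(time.perf_counter() - start)
    return samples

# Hypothetical entry points -- substitute the hosts in your own path:
# for host, url in {"Web-Optimizer": "http://optimizer.example.com",
#                   "FireWall-LB": "http://fw-lb.example.com",
#                   "App-01": "http://app-01.example.com"}.items():
#     stats = summarize(time_entry_point(url))
#     print(f"{host:<15} {stats['Min']:.7f} {stats['Max']:.7f} {stats['Avg']:.7f}")
```

Pointing the same probe at each layer's entry point gives you a per-hop table much like the sample output above.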

Issues

Nothing comes for free, and adding extra layers of application monitoring has a cost associated with it. The example above generates four times the number of simulated client requests to the application server, and depending on the frequency of those requests, that can in turn cause performance issues of its own due to capacity. Building transparent monitoring into the network layer can help ease that problem. In a future article, I'll look into what it takes to implement transparent monitoring at the various layers in the network stack.