#webperf Purpose-built hardware is integral to some network services but not to others. Understanding the difference will help you architect networks that deliver highly scalable, top-performing applications.
In the world of web performance there are two distinct functions that are constantly conflated: acceleration and optimization. This confusion is detrimental because it impedes the ability to architect network services in a way that maximizes each, with the goal of delivering highly performant applications.
In the increasingly bifurcated networks of today, acceleration is tightly coupled to hardware-enhanced network components that improve application performance by speeding up specific compute-intensive functions. Services like SSL termination, compression, hardware-assisted load balancing and DDoS protections offer applications a dramatic boost in performance by performing certain tasks in hardware*.
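To see why functions like compression land on the acceleration side of the line, it helps to watch how much CPU time software compression consumes. A minimal sketch (the payload and compression level are illustrative assumptions, not benchmarks from this article):

```python
import gzip
import time

# Illustrative sketch: compressing a payload entirely in software to show
# why compression counts as a compute-intensive function worth offloading.
payload = b"<html><body>" + b"Hello, web performance! " * 50_000 + b"</body></html>"

start = time.perf_counter()
compressed = gzip.compress(payload, compresslevel=9)
elapsed = time.perf_counter() - start

print(f"original:   {len(payload):>9,} bytes")
print(f"compressed: {len(compressed):>9,} bytes")
print(f"cpu time:   {elapsed:.4f}s spent in software compression")
```

Every one of those cycles is stolen from the application itself, which is exactly the cost that purpose-built hardware absorbs.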
Obviously, if you deploy a network component designed to take advantage of purpose-built hardware on a software or virtual form-factor, you lose the benefits of the hardware acceleration. Form-factor is important to acceleration.
On the other side of the fence is optimization. Optimization is a different technique that seeks to eke out every last bit and CPU cycle possible from network and compute resources, either by reducing network overhead (most of these techniques are TCP-related) or by reducing the size of the data being transferred (WAN optimization and application-specific services like minification and image optimization). Optimizations are functions and are not reliant on specialized hardware**.
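Minification is a good illustration of why optimizations need no special hardware: they are pure algorithms. A deliberately naive CSS-minifier sketch (real minifiers do far more; the function name and regexes here are illustrative assumptions):

```python
import re

def minify_css(css: str) -> str:
    """Naive CSS minifier sketch: strips comments and collapses whitespace.
    Illustrates that optimization is pure software, no special hardware."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)   # drop /* comments */
    css = re.sub(r"\s+", " ", css)                    # collapse runs of whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)      # trim space around punctuation
    return css.strip()

source = """
/* header styles */
h1 {
    color: #333 ;
    margin : 0 auto ;
}
"""
print(minify_css(source))  # → h1{color:#333;margin:0 auto;}
```

The work is entirely CPU-and-memory bound string processing; it runs the same on a hardware appliance, a software instance, or a virtual machine.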
This separation between acceleration and optimization, between form-factor and function, is imperative to recognize because it enables more efficient, better performing data center architectures to be designed. It also offers guidance on which functions can reasonably be moved to software and/or virtual form-factors without negatively impacting performance. SSL termination in software of any kind, for example, cannot equal the performance of its hardware-assisted counterpart: every gain made in general-purpose hardware also benefits purpose-built hardware, and the latter retains the added advantage of special-purpose processing. Thus, when deploying support for SSL, it makes sense to implement it in the core network.
Functions such as minification and other application content-specific processing can be deployed in the application network, close to the application being supported. Such functions are affected by compute configuration, of course, but they primarily benefit from highly tuned algorithms that do not require purpose-built hardware. While they gain something from the highly tuned networking stacks of purpose-built hardware, they still deliver considerable benefits when deployed on software or virtual form-factors.
Understanding the difference between acceleration and optimization and their relationship to form-factor is a critical step in designing highly available, fast and secure next generation data center networks as that network continues to split into two equally important but different networks.
*You can claim that hardware-assisted network functions are going away or are no longer necessary because of advances being made by chip makers like Intel and AMD, but read carefully what those chip makers are actually doing. They are moving compute-intensive processing into hardware to accelerate it. In other words, they are doing what much of the networking industry has done for decades: leveraging hardware acceleration to improve performance.
** WAN optimization relies heavily on caching as part of its overall strategy and thus the amount of memory is an important factor in its effectiveness, but memory is rarely specialized.
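The caching at the heart of WAN optimization can be sketched as chunk-level deduplication: when the far side has already seen a chunk, send only its hash. The class name, chunk size, and wire format below are illustrative assumptions, not any vendor's actual protocol:

```python
import hashlib

class ChunkCache:
    """Minimal sketch of the caching idea behind WAN optimization:
    transmit a short digest instead of the bytes for chunks the peer
    has already received. Plain memory is all that is required."""

    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.seen: dict[bytes, bytes] = {}  # digest -> chunk, ordinary RAM

    def encode(self, data: bytes) -> list[tuple[str, bytes]]:
        """Return ('ref', digest) for cached chunks, ('raw', chunk) otherwise."""
        out = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).digest()
            if digest in self.seen:
                out.append(("ref", digest))   # already transferred: hash only
            else:
                self.seen[digest] = chunk
                out.append(("raw", chunk))    # first sighting: full bytes
        return out

cache = ChunkCache()
first = cache.encode(b"A" * 8192)   # second chunk repeats the first, so it dedups
second = cache.encode(b"A" * 8192)  # everything is now a cheap reference
```

The more memory available, the more chunks stay resident and the higher the deduplication rate, which is why memory size matters to WAN optimization even though the memory itself is not specialized.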