It is truly intriguing, when you delve into the multicore problem, to find that different companies have taken such wildly different approaches to solving it and avoiding the pitfalls inherent in parallelism.

RogueWave has decades of experience writing efficient code that serves the needs of their customers. Back when I was a Project Manager for shrink-wrapped software (COTS in the current lingo), we used their libraries to get a lot of work done that would have been much more painful without them. But time marched on and RogueWave's C++ libraries became less useful in the enterprise as the enterprise moved to more developer-friendly languages. I honestly didn't even know that they still existed.

They still sell C++ libraries, but those have nothing to do with their solution to the multicore challenge other than providing a backdrop to their solution, so I will stop reminiscing and give you the information about Hydra, their multicore solution.

Most companies attack the multicore problem in a generic manner, focusing on modifying the OS or the compiler to account for the multiple cores installed in the system. RogueWave looked at the rapid adoption of Web 2.0 development technologies and the rapid growth in the number of cores and found a much simpler (says the guy who wasn't on the development team) approach to the problem.

RogueWave took a new look at how applications are developed and deployed, and came to the conclusion that you can avoid all of the problems with globals and locking and race conditions if you simply parallelize services. So instead of walking through your code and trying to figure out what to parallelize, or modifying the OS to do it for you, they simply said "an instance of a service is already parallelizable." Extending that idea, they start a new instance of your service on another core when needed. There are several reasons why this is appealing; let's look at them.

  • No code changes. Since the system is merely spawning another copy of your service tied to a different core, it doesn't need recompiles or any of that.
  • Service level parallelization means large, long-running transactions can be discretely parallelized. Even if you are running a multi-part transaction, there's much less worry that the system will get bogged down - that core will wait while parts of the transaction finish, but the rest of the system will carry on.
  • Instant benefits. If you already have a Web Services heavy environment, you can start utilizing many cores to their maximum today by installing their application server (more on this later, double-edged sword and all).
  • Internal communications. If you have a service on core three calling a service that ultimately is on core four, no need to take a trip to the network.
  • No worries about traditional multicore challenges. Of course you'll want to test your solution with their toolset, but there's no need to recompile or have some pre- or post-processor change your code, and no worries about what it changed or whether it introduced hidden race conditions into the final product. A service runs to completion on its own CPU, just as you would expect, so nothing really changes.
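To make the instance-per-core idea concrete, here's a rough sketch of what a scheduler might do when demand rises. This is my own illustration, not RogueWave's actual implementation: every name in it is invented, and a real system would fork/exec the unmodified service binary and pin it to the chosen core (e.g. with `os.sched_setaffinity` on Linux) rather than just bookkeeping.

```python
class CoreScheduler:
    """Hypothetical sketch of Hydra-style instance placement.

    When demand rises, a fresh copy of the unmodified service is
    started and pinned to the least-loaded core -- no recompile, no
    source changes, because each instance is an isolated process.
    """

    def __init__(self, num_cores):
        # Track how many service instances each core is running.
        self.load = {core: 0 for core in range(num_cores)}

    def spawn_instance(self, service_name):
        # Pick the core with the fewest running instances.
        core = min(self.load, key=self.load.get)
        self.load[core] += 1
        # A real system would fork/exec the service binary here and
        # pin the child to `core` before returning.
        return (service_name, core)
```

The point of the sketch is that the service itself never changes; only the placement logic decides which core the next copy lands on.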

Add to all of this that they have a rudimentary internal load balancer that routes requests, based on their XML content, to the correct application and instance of that application, and you start to get the picture. The load balancer (or service manager, if you prefer) can route between servers or just between cores on a server, though there are better solutions for routing outside the box.
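Content-based routing of that sort boils down to peeking at the XML payload before dispatching. The sketch below is my own guess at the shape of it, not their code; the element names and routing table are entirely made up.

```python
import xml.etree.ElementTree as ET

# Hypothetical routing table: map the root element of a request to the
# service that should handle it. These names are invented for illustration.
ROUTES = {
    "OrderRequest": "order-service",
    "QuoteRequest": "quote-service",
}

def route(xml_payload):
    """Inspect the XML content and pick the target service."""
    root = ET.fromstring(xml_payload)
    # Strip any namespace prefix ("{ns}Tag" -> "Tag") before the lookup.
    tag = root.tag.split("}")[-1]
    return ROUTES.get(tag, "default-service")
```

A real service manager would also pick a specific instance (and core) behind that service name, but the decision starts with the content, not the URL.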

In case you missed it, I'm in love with this solution. But of course no solution is perfect, particularly in a new and growing space, so what are the pitfalls? We'll cover them one at a time also.

  • Web Services only. It has to run in an Application Server or it can't be parallelized. All those traditional fat clients and non-web service thin client apps don't benefit from this solution.
  • Application Server Replacement. They have a light-weight application server that you have to install on the box running the service. Of course you'll need to test your application on it, because it's an unknown.
  • Not Microsoft. Some of you develop MS web services, and this solution doesn't support them at this time - the app server they supply is a Tomcat replacement (not really replacement, see below).
  • Internal Communications. Lori and I were talking about this, and we're not clear whether a policy that says "if the client and the service are both on this server, don't XML-encode the call" might violate web services standards. This isn't a known weakness, just something you should be aware of.
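As I understand it, that internal-communications optimization amounts to something like the following. This is a hedged sketch under my own assumptions, with every name invented (and JSON standing in for the XML marshalling a real implementation would do):

```python
import json  # stand-in for XML marshalling in this sketch

# Services co-located on this box: name -> callable. Invented for illustration.
LOCAL_SERVICES = {}

def send_over_network(name, wire_payload):
    # Placeholder for the real transport (HTTP/SOAP); not modeled here.
    raise NotImplementedError("remote transport not modeled in this sketch")

def call_service(name, payload):
    """Call a service by name. If it runs on this box, invoke it
    in-process and skip encoding entirely; otherwise marshal the
    payload and take the network hop."""
    if name in LOCAL_SERVICES:
        return LOCAL_SERVICES[name](payload)  # direct call, no encoding
    wire = json.dumps(payload)                # would be XML in reality
    return send_over_network(name, wire)
```

The standards question above is exactly about that first branch: the call never becomes XML at all when both ends share the box.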

But assuming you've got applications that are largely web-service enabled and those services can run on Tomcat, you will find a lot to like here. Content-based routing was mandatory to solve the problem they're after (how else would you know which web service a request is aimed at?), and the solution is elegant, with broader implications than the current solution set.

Perhaps the reason I'm so impressed with this solution is that it is largely what we do: we have a virtual IP with a pool of available resources behind it - actual unique IP addresses available to service requests on the virtual IP. They have a virtual service with a pool of available cores behind it - actual physical cores available to service requests on the virtual service. So what we enable on the network, they enable on a multicore box. Very cool.

As to my "Tomcat replacement" comment, the RogueWave team was very clear that they need to replace Tomcat with their app server on the box you're running services on, but they know full well that your public-facing box will continue to run whatever application server you run today. Makes perfect sense to me: where speed and XML processing are king, put in their solution; where your actual core app lives and users navigate to, keep your current application server.

If you meet their target market and want to get the most out of your multicore boxes, I highly recommend you check them out. As with all of these, I haven't touched this solution yet, but I'd like to. The one caveat I have is that commercial application servers should be looking to work this type of technology into their offerings soon, and if they do, a third-party offering might become redundant.

Until next time...
