Forum Discussion

Sergio_Magra
Jul 02, 2013

Load Balancing ESB

Hi,

 

we are in the process of load balancing a SOA infrastructure.

 

We have some doubts about how to do this efficiently:

 

We see that, from the BIG-IP LTM's point of view, the ESB and its services are seen as a single IP and port per node. For example, the pool of ESBs will be esb1:80, esb2:80, esb3:80, and all the services are configured using the same pool members.

 

The different services are identified by a unique URI inside the nodes: esb1:80/services/service1, esb1:80/services/service2, etc.

 

To manage all of this, we are considering the following:

 

  1. Creating a virtual server for the entire ESB.
  2. Creating one pool per service. The pool members are always the same (esb1:80, esb2:80, esb3:80), but a different monitor is applied to each pool, and each monitor has to correctly identify the status of its service. The reason for creating several instances of the same pool is that if one member stops serving a given service, that service is marked down on that member while the other services running on the same member keep working.
  3. Creating an HTTP class profile for each service in order to match the specific service URI with the corresponding pool.
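Step 2 above could look roughly like the following tmsh sketch. This is only an illustration: the monitor and pool names are hypothetical, and the send/recv strings would need to match what each service actually returns when healthy.

```
# Hypothetical per-service HTTP monitor: probe the service URI and
# expect a 200 in the response (adjust send/recv to the real service)
create ltm monitor http mon_service1 send "GET /services/service1 HTTP/1.1\r\nHost: esb\r\nConnection: close\r\n\r\n" recv "200"

# One pool per service: same members, different monitor
create ltm pool esb_pool_1 monitor mon_service1 members add { esb1:80 esb2:80 esb3:80 }
```

Repeating this per service gives each service its own health state while the underlying servers stay the same.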

 

We think this will work, but we don’t know whether we are doing it efficiently. Maybe there are other options for doing this, which is the reason for this post.

 

 

We would appreciate your suggestions and experiences.

Thanks in advance.

Best regards,

Sergio

 

5 Replies

  • So if I understand you correctly, each service should be in its own pool (each pool containing the same members esb1:80, esb2:80, esb3:80), have a service-specific monitor applied to its pool, and then you'd use an iRule or HTTP class to steer traffic to the appropriate pool based on the request URI? Correct?
  • Correct, Kevin.

     

     

    We need to know whether this is the right way to do it, or whether it is an inefficient approach.

     

     

    Thanks and Best regards

     

     

    Sergio

     

  • This is perfectly reasonable, and not too uncommon. I'll just throw out a few observations:

    1. Each pool should only contain servers for a specific service. So if the request is for "/services/service1", for example, you'd send that request to the ONE pool that can service that request.

    2. HTTP classes are deprecated in 11.4. You're probably okay for now on your current version, but that'll eventually change. I'd recommend an iRule-based alternative, something like the following:

    
    when HTTP_REQUEST {
         switch -glob [string tolower [HTTP::uri]] {
              "/services/service1*" { pool esb_pool_1 }
              "/services/service2*" { pool esb_pool_2 }
              "/services/service3*" { pool esb_pool_3 }
              default { pool default_pool }
         }
    }
    

    The default_pool is some pool that you've defined for any traffic that doesn't match the other criteria; alternatively, you can simply close the connection. The above also assumes that everything within a service lives under this URI pattern.

    3. Because you're using pools and monitors, you have an opportunity to very easily scale the services (add multiple servers to each pool for redundancy) - if you haven't already thought of that. Otherwise you need to consider what may happen if that one server (or all of the servers in that pool) becomes unavailable. Do you close the connection? Do you send some static "maintenance page" HTML? Do you redirect to some other page or site?
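One way to handle the "all members down" case in point 3 could be sketched like this (a rough example, not a definitive implementation; the pool name and the response body are hypothetical):

```
when HTTP_REQUEST {
     if { [string tolower [HTTP::uri]] starts_with "/services/service1" } {
          # If no member of the service pool is up, answer with a static
          # maintenance page instead of letting the connection fail
          if { [active_members esb_pool_1] < 1 } {
               HTTP::respond 503 content "<html><body>Service temporarily unavailable</body></html>" "Content-Type" "text/html" "Connection" "close"
          } else {
               pool esb_pool_1
          }
     }
}
```

The same if/else pattern could be folded into each branch of the switch statement above.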

  • Kevin,

     

    thanks for the answer. Here are some comments/questions about what you said:

     

     

    Regarding 1: Let me say that all the pool members manage all the services, so for each service I will have the same pool members.

     

     

    Regarding 2: Thanks for the iRule. It is much easier to implement and maintain than an HTTP class profile.

     

     

    Regarding 3: We are thinking of adding an iRule based on HTTP::retry in order to manage the error responses.
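A rough sketch of that HTTP::retry idea, following the common iRule pattern of caching the request and replaying it once on a 5xx response (the single-retry limit is an assumption):

```
when HTTP_REQUEST {
     # Cache the full request so it can be replayed if the response fails
     set retried 0
     set request [HTTP::request]
}
when HTTP_RESPONSE {
     if { [HTTP::status] >= 500 && !$retried } {
          set retried 1
          # Re-send the cached request; LTM will re-load-balance it,
          # typically to another pool member
          HTTP::retry $request
     }
}
```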

     

     

    On the other hand: what about SLAs? Do you suggest a particular load balancing method or kind of monitoring to help ensure SLAs?

     

     

    Thanks and Best regards
  • Most would suggest "Least Connections", I think. Since each server is used in several pools, I'd suggest a node-based method (e.g. Least Connections (node)) and a node-based monitor, so that a failing server will be removed from all of the pools.
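In tmsh terms, that suggestion might look roughly like the following sketch (pool and node names are hypothetical, and the right node monitor depends on what should count as "the whole server is down"):

```
# Node-based load balancing: connection counts are tracked per server
# (node) rather than per pool, so one server's load across all of the
# service pools is taken into account
modify ltm pool esb_pool_1 load-balancing-mode least-connections-node

# Node-level monitor: if the node itself fails this check, it is
# marked down in every pool that references it
modify ltm node esb1 monitor icmp
```

The per-service HTTP monitors on each pool would still handle the case where only one service on an otherwise healthy server fails.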