Forum Discussion

W__Tout_99150
Nimbostratus
Nov 06, 2012

Calling a shell script from within an irule

I have a shell script (/bin/sh) I need to run from within an iRule. Given the below scenario:

1. I extract a header from the HTTP request
2. I extract two parameters from the header: mem_ip and mem_port
3. I pass the two parameters to the shell script as arguments
4. I use a variable, in the iRule, to capture the output of the script

Can anyone tell me if there is a way to do that and, most importantly, how?

Regards

 

9 Replies

  • While you can technically do this, it doesn't mean you should. Reaching into the management shell (management plane) from an iRule (data plane) has several performance AND security implications. The management shell consumes a very small subset of total system memory, and is not multi-processing, so it could never scale to handle traffic loads. Also, creating that "bridge" between the two planes potentially opens you up to vulnerabilities if you don't properly protect the mechanisms.

     

     

    Your best option, in my opinion, would be to employ a sideband call to a remote service (https://devcentral.f5.com/wiki/iRules.SIDEBAND.ashx) and allow it to perform your shell script. You could technically expose some custom service on the BIG-IP (mini web server, netcat, etc.) and reach in from your sideband call, but I'd recommend against that for the aforementioned reasons. If using a remote service, point your sideband call at another virtual server and then load balance (and scale) multiple services.
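
    A minimal sketch of that sideband pattern, assuming a hypothetical lookup service at 10.10.10.10:8080 that answers a one-line "ip:port" query with the pool name (the address, port, and wire format are illustrative, not anything from this thread):

    when HTTP_REQUEST {
        # open a sideband connection to the assumed lookup service
        set conn [connect -timeout 1000 -idle 30 10.10.10.10:8080]
        if {$conn ne ""} {
            # send the member IP and port extracted from the request header
            send -timeout 1000 $conn "${mem_ip}:${mem_port}\n"
            # capture the service's reply (the pool name) in a variable
            set pool_name [string trim [recv -timeout 1000 $conn]]
            close $conn
        }
    }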

     

     

    That said, what is your shell script doing? Perhaps the entire process can be done natively in iRules.

     

  • The script simply returns the pool name for the member address contained in a header of the HTTP request.
  • A few things to consider:

     

     

    1. Are you allowing the client to provide load balancing information with request headers (IP and port)? If so, this is a potentially dangerous approach. How are you protecting/controlling that data?

     

     

    2. If you're using the values for load balancing, you wouldn't technically need the pool name if you know the node IP and port.

     

     

    3. Can you explain in more detail what you're trying to accomplish? Why do you need the pool name from a given IP and port?

     

  • Why not have the full list of pool_member-pool_name statically loaded in a data group and just have the iRule use it?
  • Good call Mohamed. And then you can use a shell script to asynchronously manage the data group.
  • Kevin,

     

     

    1. The client is sending this information only for certain requests. The destination information is provided to the client by the server application. If for whatever reason the information becomes erroneous, the application will send an update to the client.

     

     

    2. I would still need the pool name because I want to be able to forward the request to the next member in the pool if the intended member is down

     

     

    3. Basically the iRule comes into play only if the member address header, affinity, is found in the HTTP request. The member IP and port information is extracted and used to check the status of that member. If the status is up, the request is routed to it. If not, it is routed following the lb method configured for the pool.

     

     

    In a simple setup where I have only a single pool associated with the virtual, there would be no problem as the pool would always be known. In my setup, however, I have a number of channels all converging into a single virtual, and routing to the appropriate pool is done based on the Host via httpclass profiles. Since the affinity header is only used for certain requests of a given type, there is absolutely no impact on the other services. All works nicely.

     

     

    It is in some edge cases, where the client loses the connection for whatever reason and tries to re-establish it for the same session, that this starts breaking down. For a new connection, the client does not send the affinity header as it hasn't received it from the server application yet. It sends it with the subsequent requests. In the case of a re-connect, it tries to re-use the same session information (as designed) and sends the affinity header.

     

     

    What is happening during re-connect is that the irule is kicking in before httpclass is selected. This means that the pool has not been selected yet. This causes the irule to trigger a tcl error when it tries to check for the status of the member:

     

     

    if {[LB::status pool [LB::server pool] member $mem_ip $mem_port] eq "up"} {
        pool [LB::server pool] member $mem_ip $mem_port
    }

     

     

    As a quick fix I did configure a default pool in the irule and that works fine. However, since I have multiple setups sitting behind the same LB pair and all need to use the same irule logic, I wanted to see if I can apply a single irule to all of them without having to statically load all the pool - member information into the irule. That would not be a good solution from a management perspective as it would require an update to the irule every time a new server is added to the cloud.
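
    Roughly, that quick fix could look something like the following in the iRule (just a sketch; "default_pool" is a placeholder for the actual fallback pool, not its real name):

    when HTTP_REQUEST {
        set cur_pool [LB::server pool]
        if {$cur_pool eq ""} {
            # no pool has been selected yet (the re-connect case); fall back to a default
            set cur_pool "default_pool"
        }
        if {[LB::status pool $cur_pool member $mem_ip $mem_port] eq "up"} {
            pool $cur_pool member $mem_ip $mem_port
        }
    }
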
  • I wanted to see if I can apply a single irule to all of them without having to statically load all the pool - member information into the irule. That would not be a good solution from a management perspective as it would require an update to the irule every time a new server is added to the cloud.

     

     

     

    As Kevin suggested, you can cron-job updating the data on a regular basis. It would simply use bigpipe or tmsh to build the list of pool_member --> pool_name, right?

     

     

    This data class would be used by all iRules, so there should be no need to update the iRules at all.

     

     

    Unless we are missing something, there is no reason for the iRule to call external services at all. All you need to know is "What is the pool name for this node:port?"
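
    For example, assuming an internal string data group named pool_map (the name is illustrative) keyed on "ip:port" with the pool name as the value, the iRule side would be a single class lookup, something like:

    when HTTP_REQUEST {
        # resolve the owning pool from the data group; an empty string means no entry
        set target_pool [class lookup "${mem_ip}:${mem_port}" pool_map]
        if {$target_pool ne "" && [LB::status pool $target_pool member $mem_ip $mem_port] eq "up"} {
            pool $target_pool member $mem_ip $mem_port
        }
    }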

     

  • I'd suggest 3 options, in order of preference:

     

     

    1. Cron-managed data group management script - the manipulation of pool data is generally a manual process, so there's probably no reason to make the lookup mechanism real time either. The beauty of this is that you can create a monitor script attached to a "phantom" pool, giving you a monitor daemon-controlled mechanism that gets saved in a config backup, and a process that diligently maintains a list of all IP-to-pool mappings for a very fast lookup in your iRule.

     

     

    2. Convert your HTTP class to a data group and add the HTTP logic to your iRule for more control over the process (see the sketch after this list).

     

     

    3. If you absolutely, positively must have this data in real time (even though the pool manipulation isn't), you can spin up a persistent netcat (or other) listener in the shell and use a sideband call in your iRule to call a system self-IP. You must absolutely make sure that your netcat script can only perform specific functions, and absolutely understand the security, performance, and supportability implications of this approach.
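
    Going back to option 2 for a moment, a rough sketch of what the HTTP class conversion could look like, assuming a string data group named host_pools (illustrative) that maps lowercase Host values to pool names:

    when HTTP_REQUEST {
        # map the Host header to a pool via the assumed data group
        set host_pool [class lookup [string tolower [HTTP::host]] host_pools]
        if {$host_pool ne ""} {
            pool $host_pool
        }
    }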

     

  • That's right. The only thing I'd need to know is the pool name for the node:port.

     

    Thank you all for the valuable suggestions. I've tried with an external class and it works fine. Next step is to determine how to best update the file containing the pool - member information, cron or otherwise.