iCall - All New Event-Based Automation System

The community has long requested the ability to effect change to the BIG-IP configuration from some external factor, be it an iRules trigger, a process or system failure event, or even monitor results. Well, rest easy folks: among the many features arriving with BIG-IP version 11.4 is iCall, a completely new event-based, granular, internal automation system. iCall gives you comprehensive control over the BIG-IP configuration, leveraging the TMSH control plane and seamlessly integrating the data plane as well.

Components

The iCall system has three components: events, handlers, and scripts. At a high level, an event is "the message," some named object that has context (key/value pairs), scope (pool, virtual, etc.), origin (daemon, iRules), and a timestamp. Events occur when specific, configurable, pre-defined conditions are met. A handler initiates a script and is the decision mechanism for event data. There are three types of handlers:

  • Triggered - reacts to a specific event
  • Periodic - reacts to a timer
  • Perpetual - runs under the control of a daemon

Finally, there are scripts. Scripts perform the action as a result of the event and handler. The scripts are TMSH Tcl scripts organized under the /sys icall section of the system.
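
To give a feel for the syntax before we get to a full example, here is a minimal sketch of a periodic handler; the handler name, script name, and 300-second interval are just placeholders, and a triggered handler appears in the example further down. Running tmsh list sys icall will show whatever scripts and handlers you have configured.

sys icall handler periodic my_periodic_handler {
    interval 300
    script my_script
}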

Flow

Basic flows for iCall configurations start with an event, followed by a handler kicking off a script. A more complex example might start with a periodic handler that kicks off a script, which generates an event that another handler picks up to run another script. These flows are shown in the image below.
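
In configuration terms, that second flow might look roughly like the following sketch. All of the object names here are placeholders, and it assumes the tmsh::generate command is available inside scripts just as tmsh generate is on the command line:

sys icall handler periodic sweep_handler {
    interval 300
    script sweep_script
}

sys icall script sweep_script {
    app-service none
    definition {
        # perform whatever periodic check you need here, then raise an
        # event for the triggered handler below to pick up
        tmsh::generate sys icall event sweep_done context { { name reason value sweep } }
    }
    description none
    events none
}

sys icall handler triggered cleanup_handler {
    script cleanup_script
    subscriptions {
        sweep {
            event-name sweep_done
        }
    }
}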

A Brief Example

We'll release a few tech tips on the development aspects of iCall in the coming weeks, but in the interim here's a prime use case. Often an event occurs where an operator wants to grab a tcpdump of the interesting traffic happening at that moment, but human reaction time just isn't quick enough. Enter iCall! First, configure an alert in /config/user_alert.conf for a pool member down:

alert local-http-10-2-80-1-80-DOWN "Pool /Common/my_pool member /Common/10.2.80.1:80 monitor status down" {
   exec command="tmsh generate sys icall event tcpdump context { { name ip value 10.2.80.1 } { name port value 80 } { name vlan value internal } { name count value 20 } }"
}

You'll need one of these stanzas for each pool member you want to monitor in this way; covering a second member, for example, is just a matter of repeating the block with that member's address and a unique alert name, as in the sketch below (the 10.2.80.2 address is illustrative).
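
alert local-http-10-2-80-2-80-DOWN "Pool /Common/my_pool member /Common/10.2.80.2:80 monitor status down" {
   exec command="tmsh generate sys icall event tcpdump context { { name ip value 10.2.80.2 } { name port value 80 } { name vlan value internal } { name count value 20 } }"
}

Next, create the iCall script: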

modify script tcpdump {
    app-service none
    definition {
        # timestamp used to build a unique capture filename
        set date [clock format [clock seconds] -format "%Y%m%d%H%M%S"]
        # pull the values out of the event context by name; the order of the
        # foreach list doesn't matter since each value is looked up by its key
        foreach var { ip port count vlan } {
            set $var $EVENT::context($var)
        }
        # capture $count packets to/from the member on the specified vlan
        exec tcpdump -ni $vlan -s0 -w /var/tmp/${ip}_${port}-${date}.pcap -c $count host $ip and port $port
    }
    description none
    events none
}

Finally, create the iCall handler to trigger the script:

sys icall handler triggered tcpdump {
    script tcpdump
    subscriptions {
        tcpdump {
            event-name tcpdump
        }
    }
}
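
With all three pieces in place, you don't have to wait for a monitor to mark the member down to test it; you can raise the event by hand with the same command the alert uses, and a capture file should land in /var/tmp shortly after:

tmsh generate sys icall event tcpdump context { { name ip value 10.2.80.1 } { name port value 80 } { name vlan value internal } { name count value 20 } }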

Ready. Set. Go!

That's one example of a triggered handler. We have many more examples of perpetual and periodic handlers across a variety of use cases in the newly created iCall wiki, and the codeshare is pre-populated with several use cases for your immediate use and testing. Get ready to jump aboard the iCall automation/orchestration train!

Published Jun 12, 2013
Version 1.0

4 Comments

  • Hi Jason, in the user_alert you have the context as ip, port, vlan, count, yet the iCall script foreach has ip, port, count, vlan... is that correct?
  • I, and likely many with me, would love another article, this time focusing on how iCall scripts are best deployed and tested. Additionally, documentation on iCall is very sparse and I've run into various limitations on my first use, none of which are documented anywhere. Loving the potential, but the lack of documentation makes it hard to like at the moment...

  • 1 - How can I add more than one member in the above script? 2 - How can I add more than one pool into the script? 3 - How will I be notified if one member goes down?