If you typically configure pools with a single monitor, failing members are easy to spot from the Pools -> Statistics page (it breaks out each pool member, and any member that is down is marked red). If there are multiple monitors configured you'll need to dig a little deeper. Here's what I would do:
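As a quick aside, you can also check member status from the CLI with tmsh (the pool name below is just a placeholder):

tmsh show ltm pool my_pool members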
First, note that the log message indicates bigd is killing a child process (this is how external monitors run), presumably because it didn't exit in time.
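To gauge how often bigd is doing this, you can check the LTM log. The exact message text varies by version, so treat this grep as a rough filter rather than a precise pattern:

grep -i bigd /var/log/ltm | grep -i kill | tail -20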
I would locate bigd's child processes to get an idea of what kind of external monitors are running. "External" here doesn't mean just the "external" monitor type; several other types also run as external processes, for example the database monitors, the FTP monitor, and so on.
Try this:
pgrep -l -P "$(pidof bigd)" | awk '{print $2}' | sort -u
The output gives you the script names, which tell you what kind of monitors you'll want to look at. A single snapshot can miss monitors that aren't running at that instant, so you'll probably want to run it a few times, or put it in a loop for a few minutes, to make sure you get a complete list; a sketch of such a loop follows.
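Here's a rough one-liner for that; the 300 iterations (roughly five minutes at one sample per second) are an arbitrary choice:

for i in $(seq 1 300); do pgrep -l -P "$(pidof bigd)" | awk '{print $2}'; sleep 1; done | sort -u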
Now that you know the names of the scripts that are running, you can work backwards from there. For example, if I see something with 'ftp' somewhere in the name, I can assume it's an FTP-type monitor. I can then list the FTP monitors that exist on the device with:
tmsh list ltm monitor ftp
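If you only want the names rather than each monitor's full configuration, filtering for the object header lines works (the pattern here just matches how tmsh prints each object):

tmsh list ltm monitor ftp | grep "ltm monitor"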
I can then use the monitor names to identify which members are failing (the grep shown above gets you just the names, rather than the default output, which includes each monitor's non-default configuration). For example, if I have a monitor called "custom_ftp" I can do the following:
tmsh show ltm monitor ftp custom_ftp
Below is the output for one of the members:
root@(3900-4)(cfg-sync Standalone)(Active)(/Common)(tmos) show ltm monitor ftp custom_ftp
---------------------------------
LTM::Monitor /Common/custom_ftp
---------------------------------
Destination: 10.21.0.100:21
State time: down for 0hr:3mins:5sec
| Last error: connect() timed out!
No successful responses received before deadline. @2016.01.26 08:11:54
Here we can see that the monitor's connection attempt to my pool member at 10.21.0.100 is timing out, so I can now address the issue accordingly.
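To confirm the failure from the BIG-IP itself, you can attempt the same connection the monitor makes. curl is available on the shell; the five-second timeout below is just an example:

curl -v --connect-timeout 5 ftp://10.21.0.100/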
Hope this helps.
-Tim