Forum Discussion

JamesS_40157
Dec 02, 2010

Blocking thousands of IP addresses (botnet)

Hi all,

We have the following iRule on our F5 BIG-IP 3400, which lets us block IP addresses that appear in an IP list (spiders, scrapers, etc.):

    when HTTP_REQUEST {
        if { [matchclass [IP::remote_addr] equals $::blockIps] } {
            HTTP::respond 403 content {
                Forbidden Page
            }
            reject
            log local0. "blocked [IP::remote_addr] requesting [HTTP::uri] as it appears in the blockIP list"
        }
    }

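(For reference, on TMOS v10 and later the same check can be written with the newer `class` command, as `matchclass` is deprecated there. A minimal sketch, assuming `blockIps` is an address-type data group — not something from the original post:)

```tcl
when HTTP_REQUEST {
    # class match performs an efficient lookup of the client IP
    # against the address-type data group named blockIps
    if { [class match [IP::client_addr] equals blockIps] } {
        HTTP::respond 403 content "Forbidden Page"
        log local0. "blocked [IP::client_addr] requesting [HTTP::uri] (blockIP list)"
        return
    }
}
```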
Currently we have around 1,200 addresses in this list, and the load balancers cope well. However, we have recently noticed that a botnet has been scanning our site. One option is to block all suspected botnet IP addresses using a database such as the one on this website - http://www.stopforumspam.com/downloads/ - unfortunately, that contains a list of about 73,000 IP addresses!

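(As an aside, a plain one-IP-per-line download like that could be converted into an external data-group file with a one-liner along these lines. This is only a sketch: the `host <ip>,` entry format applies to TMOS v10 external address data groups, and the filenames here are hypothetical, not from the thread:)

```shell
# Sketch: turn a one-IP-per-line blocklist into the "host <ip>,"
# entries used by external address data groups on TMOS v10.
# bannedips.txt stands in for the real stopforumspam download;
# blockIps.class is a hypothetical output filename.
printf '1.2.3.4\n5.6.7.8\n' > bannedips.txt
awk 'NF { printf "host %s,\n", $1 }' bannedips.txt > blockIps.class
cat blockIps.class
```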
My question is (as a fairly new F5 user): does the way we block IP addresses incur a lot of overhead, and will adding an extra 72,000 IPs to the address list cause performance issues on the load balancer? Can we make the rule perform better using just the BIG-IP 3400? (We are not yet prepared to go down the ASM route.)

Each load balancer can receive up to 7 million requests a day, if that helps.

Many many thanks in advance!

James.

3 Replies

  • Hi James,

    I don't think ~100k entries in a data group will kill a 3400, but it would be good to test it with your highest expected load.

    If it's a botnet doing a DDoS, though, I imagine a lot of the IPs wouldn't be known in advance. What are the bots scanning? Is it a web app? Are there any patterns to the requests? You might be able to use an iRule to check the HTTP requests rather than a static (and potentially outdated) list of bad client IPs.

    ASM would be an ideal option for this, as it provides a lot of simpler options for detecting and blocking bots.

    Aaron
  • Many thanks for the quick reply hoolio!

    We are not being DDoS'd, but rather scraped for data, with requests coming from hundreds of different IP addresses and many different spoofed User-Agent headers. There are patterns to the requests, but they can also look like normal user-generated requests, so we'd have to be very careful in how we code any iRule that looks at patterns.

    It's quite interesting looking into this, but unfortunately quite time-critical as well, so we wouldn't be able to look into the ASM option (at least for this particular attack that is happening at the moment). I will look at writing something to check IPs against this botnet database - we already have something similar in place for Tor nodes, which we download every day.
  • Hi James,

    If the performance of loading the blacklisted IPs is too low, you could consider another option that someone recently tried:

    http://devcentral.f5.com/Community/GroupDetails/tabid/1082223/asg/50/afv/topic/aft/1174760/aff/5/showtab/groupforums/Default.aspx

    Basically, you could set up a web app which accepts an HTTP request with the client IP set in the query string. The server would respond with an HTTP header indicating whether the client IP is blacklisted. Depending on that response, you could allow the request through to the pool or drop it. You'd use an iRule with HTTP::retry to send the sideband request to the blacklist server. See the linked post and the article from Deb in that post for details.

    Aaron
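
(The HTTP::retry approach from the last reply might be sketched roughly as below. Everything here is illustrative: the pool names `blacklist_pool` and `real_pool`, the `/lookup` URI, and the `X-Blacklisted` header are hypothetical stand-ins for whatever the lookup app actually uses — see Deb's article via the linked post for the real pattern:)

```tcl
when HTTP_REQUEST {
    # Second pass over this connection: the IP was already checked,
    # so send the replayed request to the real pool.
    if { [info exists checked] && $checked } {
        pool real_pool
        return
    }
    # First pass: save the original request, then ask the lookup
    # app about this client IP.
    set orig_request [HTTP::request]
    HTTP::uri "/lookup?ip=[IP::client_addr]"
    pool blacklist_pool
}

when HTTP_RESPONSE {
    # Only inspect responses coming back from the lookup app.
    if { [info exists checked] && $checked } { return }
    if { [HTTP::header "X-Blacklisted"] eq "1" } {
        HTTP::respond 403 content "Forbidden"
    } else {
        # Not blacklisted: replay the saved request; HTTP_REQUEST
        # fires again and routes it to the real pool.
        set checked 1
        HTTP::retry $orig_request
    }
}
```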