Bots are everywhere.  Some of them are nice, desirable bots; but many of them are not.  By definition, a bot is a software application that runs automated tasks (scripts) over the Internet.  The desirable ones include examples like Google bots crawling your website so that Google knows what information your site contains and can display your site’s URL in its list of search results.  Most people want this…many even pay significant money to make sure their site is listed in the top results on Google.  Other bots, though, are not so good.  The more malicious bots are used to attack targets…typically via a Distributed Denial of Service (DDoS) attack.  When many bots are controlled by a central bot controller, they form a “botnet” and can be used to send massive amounts of DDoS traffic at a single target.  We have seen malicious bot behavior many times, but a recent high-profile example was the Mirai botnet attack against several targets.  Let’s just say you didn’t want to be on the receiving end of that attack. 

Needless to say, bot activity is something that needs to be monitored and controlled.  On one hand, you want the good bots to access your site, but on the other hand, you want the bad ones to stay away.  The question is, “how do you know the difference?”  Great question.  And the unfortunate answer for many organizations is: “I have no idea.”  The other harsh reality, by the way, is that many organizations have no idea that they have a bot problem at all…yet they have a big one.  Well, the BIG-IP ASM includes several bot-defending features, and this article will outline a feature called “Proactive Bot Defense.” 

While the BIG-IP ASM has worked to detect bots for quite some time now, it’s important to know that it has also been steadily updated to include more automatic defense features.  The BIG-IP ASM uses many different approaches to defending against bad bots, including bot signatures, transactions-per-second based detection, stress-based detection, heavy URL protection, and CAPTCHA challenges.  All of those approaches are manual in the sense that they require the BIG-IP ASM administrator to configure various settings in order to tune the defense against bad bots. 

However, proactive bot defense automatically detects and prevents bad bots from accessing your web application(s).  Here’s a picture of how it works:

 

Proactive Bot Defense

 

  1. When a browser (user) initially sends a request to your web application, the BIG-IP ASM sees the request and responds with an injected JavaScript challenge for the browser to complete.
  2. The JavaScript challenge is then loaded in the browser.
  3. The browser either completes the challenge and resends the request, or it doesn’t.  If the JavaScript challenge is not answered, the request is dropped (this indicates bot activity).
  4. Legitimate browsers will answer the challenge correctly and resend the request with a valid cookie, which is signed, time-stamped, and fingerprinted by the BIG-IP ASM.
  5. After all that validation happens, the request is ultimately sent to the server for processing.

After the initial request is finally sent to the server for processing, any future requests from that browser can bypass the JavaScript challenge because of the valid, signed, time-stamped cookie the browser presents with each request.  The BIG-IP ASM steps through all these actions in order to protect your web application from getting attacked by malicious bots.  In addition to the JavaScript challenge, the ASM also automatically enables bot signatures and blocks bots that are known to be malicious.  When you add up all these bot defense measures, you get what we call “Proactive Bot Defense.”
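The signed, time-stamped cookie at the heart of this flow can be illustrated with standard HMAC signing.  This is a hypothetical sketch of the general technique only; the secret, cookie layout, and expiry value here are made up and are not the ASM's actual cookie format:

```python
import hmac
import hashlib
import time

SECRET = b"example-shared-secret"  # hypothetical signing key


def issue_cookie(fingerprint, now=None):
    """Create a time-stamped, HMAC-signed token for a validated browser."""
    ts = str(int(now if now is not None else time.time()))
    payload = f"{fingerprint}|{ts}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"


def validate_cookie(cookie, fingerprint, max_age=600, now=None):
    """Accept only unexpired cookies whose signature and fingerprint match."""
    try:
        fp, ts, sig = cookie.rsplit("|", 2)
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{fp}|{ts}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered cookie -> treat as a bot
    if fp != fingerprint:
        return False  # cookie replayed from a different browser
    current = now if now is not None else time.time()
    return current - int(ts) <= max_age  # expired cookies must re-challenge
```

A browser that completes the challenge gets a cookie from issue_cookie(); on later requests, validate_cookie() lets it bypass the challenge until the cookie expires.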

 

BIG-IP Configuration

Many features of the BIG-IP ASM require you to build a security policy, but Proactive Bot Defense does not.  It is configured and turned on in the DoS profile.  To access the DoS profile from the configuration screen, navigate to Security > DoS Protection > DoS Profiles.  You will then see the list of DoS profiles; either click the name of an existing DoS profile or create a new one to configure it.  Also, on the left menu, under Application Security, click General Settings, and make sure that Application Security is enabled. 

Once you click Proactive Bot Defense, you will be able to configure the settings for the operating mode of the profile.  You will have three options for when Proactive Bot Defense is implemented:

  1. During Attack:  This checks all traffic during a DoS attack and prevents detected attacks from escalating.  DoS attacks are detected using other features of DoS protection in the ASM like Transactions Per Second (TPS) based anomaly detection (measures whether a browser is sending too many requests in a given timeframe) and Stress-Based anomaly detection (measures whether the web server is “stressed” from serving up too much data in a given timeframe).
  2. Always:  This checks all traffic at all times and prevents DoS attacks from starting.
  3. Off:  The system does not perform proactive bot defense.
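For context, the TPS-based anomaly detection mentioned in option 1 boils down to counting each client's requests inside a sliding time window.  Here is a minimal sketch of that idea; the threshold and window values are invented for illustration and are not ASM defaults:

```python
from collections import defaultdict, deque


class TpsDetector:
    """Flag clients whose request rate exceeds a threshold in a sliding window."""

    def __init__(self, max_tps=50.0, window=10.0):
        self.max_requests = max_tps * window   # e.g. 500 requests per 10 s
        self.window = window
        self.history = defaultdict(deque)      # client -> recent request timestamps

    def record(self, client, ts):
        """Record one request; return True if the client looks anomalous."""
        q = self.history[client]
        q.append(ts)
        while q and ts - q[0] > self.window:   # drop timestamps outside the window
            q.popleft()
        return len(q) > self.max_requests
```

In "During Attack" mode, a detector like this (together with the stress-based checks) is what decides that an attack is underway and that the JavaScript challenges should start.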

 

CORS

Cross-Origin Resource Sharing (CORS) is an HTML5 feature that enables one website to access the resources of another website using JavaScript within the browser.  Specifically, these requests come from AJAX calls or CSS.  If you enable Proactive Bot Defense and your website uses CORS, you should add the CORS URLs to the proactive bot URL whitelist. 

Related to this, but slightly different, is the idea of "cross-domain requests."  Sometimes a web application might need to share resources with another external website that is hosted on a different domain.  For example, if you browse to www.yahoo.com, you might notice that the images and CSS arrive from another domain like www.yimg.com. Cross-domain requests are requests with different domains in the Host and Referer headers.  Because this is a different domain, the cookie used to verify the client does not come with the request, and the request could be blocked.  You can configure this behavior by specifying the conditions that allow or deny a foreign web application access to your web application after making a cross-domain request. This feature is called cross-domain request enforcement. 
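The "different domains in the Host and Referer headers" test can be illustrated in a few lines.  This is a deliberately simplified sketch; real cross-domain request enforcement also consults the configured related-domain lists described below:

```python
from urllib.parse import urlparse


def is_cross_domain(host_header, referer_header):
    """A request is cross-domain when the Referer's domain differs from Host."""
    referer_host = urlparse(referer_header).netloc.split(":")[0]
    host = host_header.split(":")[0]
    # No Referer at all is not treated as cross-domain in this sketch.
    return bool(referer_host) and referer_host.lower() != host.lower()
```

Using the Yahoo example above, a request for an image with Host www.yimg.com and Referer http://www.yahoo.com/ would be classified as cross-domain, and so would arrive without the verification cookie.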

You enable cross-domain request enforcement as part of the Allowed URL properties within a security policy. Then you can specify which domains can access the response generated by requesting this URL (the “resource”), and also configure how to overwrite CORS response headers that are returned by the web server.

There are three options for configuring cross-domain requests:

  1. Allow all requests:  This setting is the most permissive of the three and it allows requests arriving to a non-HTML URL referred by a different domain and without a valid cookie if they pass a simple challenge. The ASM sends a challenge that tests basic browser capabilities, such as HTTP redirects and cookies.
  2. Allow configured domains; validate in bulk:  This setting allows requests to other related internal or external domains that are configured in this section and validates the related domains in advance. The requests to related site domains must include a valid cookie from one of the site domains; the external domains are allowed if they pass a simple challenge. Choose this option if your web site does not use many domains, and keep in mind that it is a good idea to include them all in the lists below.
  3. Allow configured domains; validate upon request:  This setting allows requests to other related internal or external domains that are configured in this section. The requests to related site domains must include a valid cookie from the main domain (in the list below); the external domains are allowed if they pass a simple challenge. Choose this option if your web site uses many domains, and list one main domain in the list below.

If you selected one of the two Allow configured domains options, you will need to add Related Site Domains that are part of your web site, and Related External Domains that are allowed to link to resources in your web site.  You can type these domain names explicitly (wildcards are supported).

While these options are great for cross-domain requests, they do not help with AJAX if "withCredentials" was not set by the client-side code of the application.  To solve the AJAX case, the administrator could choose from one of three options.  They are:

  1. Whitelist the AJAX URLs
  2. Set "withCredentials" in the client-side code of the application
  3. Use the dosl7.cors_ajax_urls / dosl7.cors_font_urls DB variables.

The database variables mentioned in option #3 above are as follows:

dosl7.cors_font_urls
URLs (or wildcards) of CSS that use @font-face to request fonts from another domain. Both the CSS and the FONT URLs are required here.

dosl7.cors_ajax_urls
URLs (or wildcards) of HTML pages that use AJAX to send requests to other domains. Only the HTML URL is needed here, and not the URL of the CORS request.

Requests to these URLs get redirected, and the TSPD_101 cookie gets added to the query string. For the HTML URLs, this is displayed in the address bar of the browser.  When the requests are sent from the BIG-IP to the back-end server, the additional query string gets stripped off.
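The redirect-and-strip behavior just described can be modeled as two inverse transforms on the URL.  This is only a sketch of the mechanics; the actual cookie value and encoding used by the BIG-IP are opaque, and "abc123" below is a placeholder:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

COOKIE_PARAM = "TSPD_101"


def add_cookie_to_query(url, cookie_value):
    """Client side of the redirect: carry the cookie in the query string."""
    parts = urlsplit(url)
    query = parse_qsl(parts.query) + [(COOKIE_PARAM, cookie_value)]
    return urlunsplit(parts._replace(query=urlencode(query)))


def strip_cookie_from_query(url):
    """What the BIG-IP does before forwarding the request to the back end."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != COOKIE_PARAM]
    return urlunsplit(parts._replace(query=urlencode(query)))
```

For an HTML URL, the output of add_cookie_to_query() is what the user sees in the browser's address bar; the back-end server only ever sees the stripped form.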

Example

@font-face

CSS in host1.com is requesting a font in host2.com:

@font-face {
    font-family: myfont;
    src: url('http://host2.com/t/cors/font/font.otf');
}

h1 {
    font-family: myfont;
    color: maroon;
}

 

To prevent the font request from being blocked, define the CSS and font URLs using this command:

tmsh modify sys db dosl7.cors_font_urls value /t/cors/font/style.css,/t/cors/font/font.otf

 

AJAX

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://host2.com/t/cors/ajax/data.txt");
xhr.send();

 

To prevent the data.txt request from being blocked, define the HTML that contains the JavaScript using the following command:

tmsh modify sys db dosl7.cors_ajax_urls value /t/cors/ajax/,/t/cors/ajax/index.html

 

One more thing to note about AJAX requests: the cookie that is set is valid for 10 minutes by default (5 initial minutes plus the configured Grace Period).  Single Page Applications will send AJAX requests well past this cookie expiration period, and those requests will be blocked.
In BIG-IP version 13.0.0 and up, there is support for Single Page Applications.  You can simply check the checkbox in the General section of the DoS profile.  Enabling this option causes JavaScript to be injected into every HTML response, which allows these requests to be supported.

 

Grace Period

Another configuration item to consider is what’s called the “Grace Period.”  This is the amount of time the BIG-IP ASM waits before it begins bot detection.  The default value is 300 seconds, but this can be changed in the DoS profile settings along with the other items listed above.  The Grace Period gives web pages (including complex pages with images, JavaScript, CSS, etc.) the time to be recognized as non-bots, receive a signed cookie, and completely load without unnecessarily dropping requests.  The Grace Period begins after the signed cookie is renewed, after a change is made to the configuration, or after proactive bot defense starts as a result of a detected DoS attack.  During the Grace Period, the BIG-IP ASM will not block anything, so be sure to set this value as low as possible while still allowing enough time for a complete page to load. 

 

CAPTCHA

The last thing I’ll mention is that, by default, the ASM blocks requests from highly suspicious browsers and displays a default CAPTCHA (or visual character recognition) challenge to browsers that could be suspicious. You can change the Block requests from suspicious browsers setting by clearing either Block Suspicious Browsers or Use CAPTCHA.

 

 

There are many other bot defense mechanisms available in the BIG-IP ASM, and other articles will cover those, but I hope this article has helped shed some light on the details of Proactive Bot Defense.  So, get out there and turn this thing on…it’s easy and it provides automatic protection!