Forum Discussion

Joe_Hsy_45207
Mar 08, 2007

HTTP::version not working? Breaks CreditCardScrubber iRule on chunked sites

Hi,


I posted a message a few weeks ago indicating that somehow turning on the Web Accelerator Module broke an iRule. It turns out it had nothing to do with WAM, but rather with whether the website used chunking or not. In fact, adding WAM seemed to fix the issue by actually adding a Content-Length header.

The problem was that the iRule contains a section that changes the HTTP::version from "1.1" to "1.0" in order to disable chunking. Unfortunately, the HTTP::version setting appears to be broken (seen on both BIG-IP 9.2.5 Build 5.1 and BIG-IP 9.4.0 Build 617.5). I logged the version right after setting it:

 

 

HTTP::version "1.0"
log local0. "HTTP::version = [HTTP::version]"

and it still showed the version as "1.1".

Has anyone else run into this problem? Are there other ways to disable chunking (or to work with the chunking)?
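For reference, the fallback I'd consider for working *with* chunking is buffering the response body before scrubbing, roughly like this (a simplified sketch of the collect pattern, not the actual scrubber rule; the collect size here is illustrative only):

when HTTP_RESPONSE {
   # Buffer the response body so it can be inspected even when
   # the server sends it chunked
   HTTP::collect 1048576
}
when HTTP_RESPONSE_DATA {
   # [HTTP::payload] holds the buffered bytes at this point;
   # scrub here, then release the data to the client
   HTTP::release
}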


Thanks!


//Joe

3 Replies

  • I think some (or all?) of the values of HTTP (and other?) commands are cached. I know HTTP::uri is cached, so even if you update it, the value displayed in a subsequent log statement isn't changed. For example:

    
    when HTTP_REQUEST {
       log local0. "original version: [HTTP::version]"
       log local0. "original URI: [HTTP::uri]"
       HTTP::version 1.0
       HTTP::uri /new/uri/
       log local0. "updated version: [HTTP::version]"
       log local0. "updated URI: [HTTP::uri]"
    }

    The log output shows no change in either the version or the URI:

    Mar 9 11:03:43 tmm tmm[1085]: Rule change_version_uri_rule : original version: 1.1

    Mar 9 11:03:43 tmm tmm[1085]: Rule change_version_uri_rule : original URI: /

    Mar 9 11:03:43 tmm tmm[1085]: Rule change_version_uri_rule : updated version: 1.1

    Mar 9 11:03:43 tmm tmm[1085]: Rule change_version_uri_rule : updated URI: /

    However, a tcpdump of the request received by the BIG-IP versus the request sent to the web server shows both changes were made:

    Original request:

    GET / HTTP/1.1

    Host: test

    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB;rv:1.8.1.2) Gecko/20070219 Firefox/2.0.0.2

    Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5

    Accept-Language: en-gb,en;q=0.5

    Accept-Encoding: gzip,deflate

    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7

    Keep-Alive: 300

    Connection: keep-alive

    Pragma: no-cache

    Cache-Control: no-cache

    Request sent to the web server:

    GET /new/uri/ HTTP/1.0

    Host: test

    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.8.1.2) Gecko/20070219 Firefox/2.0.0.2

    Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5

    Accept-Language: en-gb,en;q=0.5

    Accept-Encoding: gzip,deflate

    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7

    Keep-Alive: 300

    Connection: keep-alive

    Pragma: no-cache

    Cache-Control: no-cache

    If you disable HTTP/1.1 on a browser, does the rule work? This should have the same effect as the rule changing 1.1 requests to 1.0: preventing chunked responses.

    If you capture a tcpdump of a failure, do you see the BIG-IP using 1.1 to the node? Do you see the node sending chunked responses?
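    If the node still answers with 1.1 even when the BIG-IP sends 1.0, you could also try discouraging a persistent connection on the server side. This is a sketch, untested here; the idea is that some servers only fall back to Content-Length responses when keep-alive is off:

    when HTTP_REQUEST {
       HTTP::version "1.0"
       # Close the connection per request; an HTTP/1.0 exchange
       # without keep-alive typically gets a Content-Length
       # response rather than a chunked one
       HTTP::header replace Connection "close"
    }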

    Aaron
  • Hi Aaron,

    You are right on with that assessment. Right after posting my message, I was told that HTTP::version will simply display the original value, not the changed value, and thus what is sent to the server should be correct. We were able to verify that this is indeed the case.

    It looks like the server is ignoring the HTTP 1.0 and still using HTTP 1.1. The latest thinking is that this may be due to the presence of other headers that are HTTP 1.1-specific, which triggers the server to use HTTP 1.1 anyway. We will experiment and post what we find.
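    The experiment we have in mind is roughly this (a sketch; the header names are our guesses at the 1.1-specific culprits, not a confirmed fix):

    when HTTP_REQUEST {
       HTTP::version "1.0"
       # Strip headers that only make sense in HTTP 1.1, in case
       # their presence is what keeps the server on 1.1
       HTTP::header remove "TE"
       HTTP::header remove "Keep-Alive"
       HTTP::header replace Connection "close"
    }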


    Much thanks!


    //Joe


  • Hi Aaron,

    So the mystery has been solved, but the issue turns out to be completely different than expected. After much experimentation and luck, I was able to get the CreditCardScrubber iRule working against the test chunking site by changing the HTTP::collect parameter from the default 4294967295 (4+ GB) to an apparently more manageable 1000000000 (1 GB).

    I don't know for sure why, but I suspect that some memory allocation is failing with the larger collect parameter. At 3 GB I saw intermittent failures to get HTTP_RESPONSE_DATA. I found 2 GB to be very consistent in succeeding, but set it at 1 GB to be safe. It's unlikely any HTML page will be 1 GB in size in any case.
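    Concretely, the fix was just the size passed to the collect call (a sketch of the changed line in context):

    when HTTP_RESPONSE {
       # Was: HTTP::collect 4294967295 -- intermittent failures
       # at 3 GB, reliable at 2 GB; 1 GB leaves headroom
       HTTP::collect 1000000000
    }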

    I will post a separate message on this, since the subject title assumed a completely different problem.


    Again, thanks for your help!


    //Joe