Forum Discussion

invisible
Feb 04, 2019
Solved

Excessive logging in Local Traffic after upgrading to 14.1 LTM/APM

Hello everybody,

I am observing the following phenomenon on our APM/LTM VE boxes after upgrading to 14.1: every 15 minutes, at a rate of roughly 200 messages per second, the messages below are logged to the Local Traffic section of the log.

I did some reading, and it seems the diskmonitor utility is trying to access a non-existent disk/partition. The mcpd daemon is running, so it is not causing trouble. Rebooting does not help.

Any ideas on how to check the diskmonitor config and edit/disable it?

Thanks

Mon Feb 4 07:11:06 UTC 2019 warning     diskmonitor[28049]  011d0002    Skipping net:[4026541716]. Stat returned message: /usr/bin/stat: cannot read file system information for net:[4026541716]: No such file or directory
Mon Feb 4 07:11:06 UTC 2019 warning     diskmonitor[28052]  011d0002    Skipping net:[4026541778]. Stat returned message: /usr/bin/stat: cannot read file system information for net:[4026541778]: No such file or directory
Mon Feb 4 07:11:06 UTC 2019 warning     diskmonitor[28055]  011d0002    Skipping net:[4026541840]. Stat returned message: /usr/bin/stat: cannot read file system information for net:[4026541840]: No such file or directory
Mon Feb 4 07:11:06 UTC 2019 warning     diskmonitor[28058]  011d0002    Skipping net:[4026541902]. Stat returned message: /usr/bin/stat: cannot read file system information for net:[4026541902]: No such file or directory
Mon Feb 4 07:11:06 UTC 2019 warning     diskmonitor[28061]  011d0002    Skipping net:[4026541964]. Stat returned message: /usr/bin/stat: cannot read file system information for net:[4026541964]: No such file or directory

P.S. I should mention that APM is provisioned with >100 VLANs and Route Domains.

P.P.S. The solution is provided further down in the thread - run the command in the advanced shell to disable logging of that particular message.

  • If you are running 14.x and using route domains, this is a known bug (ID760468) and can be safely ignored.

     

    For a workaround, please review: https://cdn.f5.com/product/bugtracker/ID760468.html
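The linked bug article lists the supported workarounds. For illustration only, one general approach on BIG-IP is to add a syslog-ng filter via tmsh so that messages carrying the 011d0002 code are never written to the log; the filter name and exact quoting below are assumptions, so verify the syntax against the bug article before applying anything:

```shell
# Sketch only -- not the verbatim workaround from the bug article.
# Appends a syslog-ng filter through tmsh so lines carrying the
# diskmonitor message code 011d0002 are dropped before they reach the log.
tmsh modify sys syslog include '
filter f_drop_diskmonitor {
    not match("011d0002" value("MESSAGE"));
};
'
# Persist the change across reboots.
tmsh save sys config
```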

18 Replies

  • I have the same problem, although the messages are logged at a lower rate. I have two i2600 devices in a cluster, and both are logging the same message.

     

    Feb 6 07:11:02 ****** warning diskmonitor[29703]: 011d0002:4: Skipping net:[4026532451]. Stat returned message: /usr/bin/stat: cannot read file system information for net: No such file or directory

     

    Feb 6 07:11:02 ****** warning diskmonitor[21331]: 011d0002:4: Skipping net:[4026532631]. Stat returned message: /usr/bin/stat: cannot read file system information for net: No such file or directory

     

  • I have the same problem here, running an i2600 LTM cluster on version 14.1.

     

    Wed Feb 20 03:51:02 CET 2019 warning ***** diskmonitor[4942] 011d0002 Skipping net:[4026532450]. Stat returned message: /usr/bin/stat: cannot read file system information for net:[4026532450]: No such file or directory
    Wed Feb 20 03:51:02 CET 2019 warning ***** diskmonitor[4948] 011d0002 Skipping net:[4026532510]. Stat returned message: /usr/bin/stat: cannot read file system information for net:[4026532510]: No such file or directory

     

    Does anyone know the reason for these warnings, and how they can be resolved?

     

  • What I see is that while these messages are logged, they do not affect the behaviour of the appliance, so I simply ignore them now.

     

  • Can you please try reloading mcpd and then rebooting the F5?

     

    1. Take a UCS backup
    2. Force an mcpd reload: touch /service/mcpd/forceload
    3. Reboot the F5
    • invisible

      Can't touch the system now, but I would appreciate it if someone else could test.

       

    • Torti

      Reloading mcpd and rebooting doesn't help. The message still appears three times every minute.

       


  • Hi!

     

    I have been seeing the same logs since I upgraded to v14.1.0.1. This is a BIG-IP 5250 cluster.

     

    Did you get any feedback from F5 support?

     

    Benjamin

     

    • Benjamin_8557

      I notice that the system logs these every 10 minutes, at hh:m1:02.

       

      Which process is generating these logs?

       

    • invisible

      No response from F5 support. For now I have decided simply to filter out these log entries in our central logging/monitoring system, but they are obviously still logged locally on the F5.

       

    • invisible

      None. It is present in the v15 code as well.

       

      The only way for me to deal with it is to drop these messages when they arrive at the syslog server, and to run sed on the F5 itself to delete them from the local logs.
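A minimal sketch of that sed-based cleanup (the /var/log/ltm path and the exact message pattern are assumptions based on the entries quoted above; this demo runs against a sample file so nothing live is touched):

```shell
# Demo: strip diskmonitor 011d0002 warnings from a log file.
# On a BIG-IP the target would be /var/log/ltm; a sample file is used here.
LOG=/tmp/ltm.sample
printf '%s\n' \
  'Feb  6 07:11:02 bigip warning diskmonitor[29703]: 011d0002:4: Skipping net:[4026532451].' \
  'Feb  6 07:11:03 bigip notice mcpd[1234]: 01070417:5: unrelated message' \
  > "$LOG"
# Delete every line produced by diskmonitor that carries message code 011d0002.
sed -i '/diskmonitor\[[0-9]*\].*011d0002/d' "$LOG"
```

Always test the pattern on a copy first; sed -i edits the file in place.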

  • If you are running 14.x and using route domains, this is a known bug (ID760468) and can be safely ignored.

     

    For a workaround, please review: https://cdn.f5.com/product/bugtracker/ID760468.html

    • corrado

      Thank you very much! Yes, we are using route domains, and the high number of logs was actually a little concerning; it's very nice to have official confirmation that everything is OK.

       

      Confirmed solution to my problem! Thanks again!

    • invisible

      Thanks, that's what I was looking for. Yes, we also use RDs.

       

      BTW, I used the second option in the above-mentioned solution - I think it is better to eliminate the root cause (writing into the log) instead of dealing with the consequences (excluding that particular message from being logged).