It's not surprising that the LTM can reply to pings despite not being able to resolve the ARP entry. LTM uses auto-lasthop by default, which causes reply packets to go back to the same source MAC the previous packet in the connection was received from, whether or not that device would normally be part of the L3 route. In effect, the LTM sees the echo request come from L2 address abc, so it sends the reply back to L2 address abc, no ARP resolution needed.
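If you want to verify that's what's happening, you can check the setting and the ARP table from the guest's shell. A quick sketch, with "external" standing in for your vlan name:

    # Per-vlan auto-lasthop setting ("default" means follow the global
    # setting, which is enabled out of the box)
    tmsh list net vlan external auto-lasthop

    # Confirm the FW's entry really is stuck incomplete
    tmsh show net arp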
Do you have other vCMP guests on that chassis to test against? (You won't be able to ping the hypervisor addresses on those vlans, even if you have assigned them.) If you do, are they having any issues reaching the FW?
When trying to ping from the LTM, do you see the ARP requests being received on the FW? If you can't capture there, can you see them if you tcpdump on the Hypervisor (not the guest)?
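Something along these lines should do it from the bash prompt; the vlan name and FW address are placeholders:

    # On the guest: are the ARP requests actually being sent?
    tcpdump -ni external arp and host 10.1.1.254

    # On the Hypervisor: do they make it down to the physical interfaces?
    # (0.0 is BIG-IP's "all interfaces" pseudo-interface)
    tcpdump -ni 0.0 arp and host 10.1.1.254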
I have seen some devices (the L2/3 switch in my lab) use the same link in the LAG for all packets they cannot hash using their default mechanism. I don't know if F5 does this or not, but if so, is it possible that the link it's using to send the ARPs through the LAG is dropping them? For example, if the 6500 hashes the echo request to LAG member 3 and the LTM hashes the echo response to member 4, all would be well. But if the LTM used member 2 for the ARP requests, and member 2 is dropping short packets (or whatever), the effect would be what you describe.
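I don't know of a way to see which member a particular frame hashed to, but you can at least check the hash policy and watch the per-member counters while the pings run; "my_trunk" is a placeholder:

    # Shows the trunk config, including the distribution-hash (the fields
    # used to pick a member link) and the current member interfaces
    tmsh list net trunk my_trunk

    # Per-interface counters; watch the trunk members while ARPs are sent
    tmsh show net interface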
Another potential issue would be the vCMP guest blade assignment. By that I mean: if the guest is provisioned for "all-slots" but the Hypervisor has slot 3 disabled, the guest could be trying to send the ARP request out a link on blade 3, not realizing that the blade is disabled at the hypervisor level. (Yes, you'd expect the guest to detect the status of the physical blade and disable the virtual blade to match, but such is not the case in my experience.) This scenario would also be masked by the auto-lasthop feature: the Hypervisor blade that receives and forwards the echo request to the guest won't use any disabled blades in the chassis, so the guest only receives the request on a cluster member residing on an enabled hypervisor blade, and the reply goes back out the same way.
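You can compare what the guest is assigned against what's actually enabled; "guest1" is a placeholder name:

    # On the Hypervisor: the guest's config, including its slot assignment
    tmsh list vcmp guest guest1

    # On the Hypervisor, and then again on the guest: compare what each
    # side believes about the cluster members' availability
    tmsh show sys cluster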
I realize that my description of the second scenario may be a bit difficult to follow, especially if you're not entirely familiar with how the guest sees itself. The short version is that, if a hypervisor blade is disabled, you'll want to disable the corresponding blade within the guest to be sure the LTM doesn't try to use it (and the virtual network links associated with it).
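From the guest's tmsh that should look something like this, assuming blade 3 is the one disabled on the Hypervisor (double-check the syntax on your version before relying on it):

    # Disable the guest's cluster member that corresponds to the disabled
    # hypervisor blade, so TMM stops trying to use its virtual links
    tmsh modify sys cluster default members { 3 { disabled } }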
Hope this helps,
--jesse