Forum Discussion

Lukasz_01_15307
Nimbostratus
Sep 30, 2016

AWS Transit VPC and pool members in a different VPC

Hello,

 

I have an AWS and VPC specific question. I'm trying to deploy a multi-VPC environment with a version of a transit VPC (https://aws.amazon.com/answers/networking/transit-vpc/). The basic idea is to have one environment per VPC - DEV, UAT, PRE-PROD and PROD - with each connecting to the internet through the transit VPC.

 

Right now I have a transit VPC with an F5 connected to three subnets - external, internal and management. I have another VPC (UAT) with a web server that I want to present to the internet through the F5, so I effectively have a node in a different VPC. I have a peering connection between the two VPCs and the route tables set up to, in theory, allow the F5 to communicate with the node. However, when I add the node to the F5 and create a new pool with an ICMP monitor, it fails its health checks. I have double-checked all the AWS route tables and security groups, and all the rules appear to be set up correctly.

 

Any ideas? What should I double-check, or what am I missing? Is there any additional F5 setup required for this to work?

 

Thanks, Lukasz

 

3 Replies

  • Without knowing the full config of your setup, other things to look at are:

     

    (a) The route table for each subnet in each VPC - does it have a route to the IP address of the node in the other VPC?
    (b) The routing table on the F5 - add a route for the subnet in the other VPC using either the external or internal interface. For the subnet that interface sits in, check that the AWS route table has a route to the node in the other VPC as per item (a) above.

     

    It makes no difference that the node you are trying to run the ICMP health check against is not on the F5's internal or external subnet; this is most likely a routing or network access issue. (A rough way to script the route table check from item (a) is sketched below.)
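    A minimal sketch of that route table check, assuming boto3; the peer CIDR and peering connection ID below are placeholders for your own values:

# Sketch only: confirm each route table has a route for the peer VPC's CIDR
# pointing at the peering connection. The CIDR and pcx- ID are placeholders.
import boto3

PEER_CIDR = "10.1.0.0/16"            # CIDR of the VPC holding the pool member
PEERING_ID = "pcx-0123456789abcdef"  # your VPC peering connection ID

ec2 = boto3.client("ec2")

for rt in ec2.describe_route_tables()["RouteTables"]:
    subnets = [a.get("SubnetId", "main") for a in rt["Associations"]]
    has_peer_route = any(
        r.get("DestinationCidrBlock") == PEER_CIDR
        and r.get("VpcPeeringConnectionId") == PEERING_ID
        for r in rt["Routes"]
    )
    print(rt["RouteTableId"], subnets,
          "peer route OK" if has_peer_route else "NO peer route")

    Run it against both sides (the transit VPC and the VPC holding the node); every subnet in the path needs the corresponding return route as well.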

     

  • I managed to get this configuration to work, but I needed to route the traffic destined for the peer VPC via the .1 address of the entire VPC. If it was routed to the first IP of the individual subnet, the traffic would not traverse the peering connection. It was a pain because, as it happened, my first subnet was being used for the management interface, so the F5 wouldn't route the traffic over that address and complained if I added a route there, as it wasn't directly connected. Once I changed the management network to another subnet so that I could route the peer VPC range to the VPC's .1 address, it all worked.

     

    I was able to health-check nodes in the peered VPC and route traffic to them via a VIP (a rough sketch of that setup is below).

     

    I've logged an AWS support ticket for clarification. Hope that helps.
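    In case it helps, here is a rough sketch of that pool/VIP setup through the iControl REST API; the addresses, object names and credentials are placeholders, not a definitive config:

# Sketch only: create a pool with a gateway_icmp monitor whose member lives in
# the peered VPC, then a VIP in front of it, via iControl REST.
# Host, credentials and addresses are placeholders.
import requests

BIGIP = "https://192.0.2.10"      # BIG-IP management address (placeholder)

s = requests.Session()
s.auth = ("admin", "password")    # placeholder credentials
s.verify = False                  # lab only; use proper certificates in production

# Pool whose member (10.1.1.10) sits in the peered UAT VPC
s.post(f"{BIGIP}/mgmt/tm/ltm/pool", json={
    "name": "uat_web_pool",
    "monitor": "gateway_icmp",
    "members": [{"name": "10.1.1.10:80", "address": "10.1.1.10"}],
}).raise_for_status()

# VIP on the external subnet; SNAT automap so return traffic comes back via the F5
s.post(f"{BIGIP}/mgmt/tm/ltm/virtual", json={
    "name": "uat_web_vs",
    "destination": "10.0.1.100:80",
    "ipProtocol": "tcp",
    "pool": "uat_web_pool",
    "sourceAddressTranslation": {"type": "automap"},
}).raise_for_status()

    The SNAT automap is the simple way to make sure the server's replies come back through the F5 rather than going straight out of the peered VPC.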

     

    • buzzy101_12743
      Nimbostratus

      Just to be clear: if your VPC is 10.0.0.0/24, you would need to route the traffic to the peer via 10.0.0.1, by adding that static route to the F5 (for example, along the lines of the sketch below).
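      A minimal sketch of adding that static route through iControl REST, assuming the peer VPC is 10.1.0.0/16 and the local VPC is 10.0.0.0/24 (names and addresses are placeholders):

# Sketch only: static route on the BIG-IP sending the peer VPC's CIDR to the
# .1 address of the local VPC, per the workaround described above.
import requests

BIGIP = "https://192.0.2.10"    # BIG-IP management address (placeholder)

s = requests.Session()
s.auth = ("admin", "password")  # placeholder credentials
s.verify = False                # lab only

s.post(f"{BIGIP}/mgmt/tm/net/route", json={
    "name": "to_peer_vpc",
    "network": "10.1.0.0/16",   # peer (UAT) VPC CIDR
    "gw": "10.0.0.1",           # the VPC's base .1 address, not the subnet's first IP
}).raise_for_status()

      The equivalent from tmsh would be along the lines of: create net route to_peer_vpc network 10.1.0.0/16 gw 10.0.0.1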