Forum Discussion

RPM_201817 (Nimbostratus)
Jul 03, 2015

Cloud Deployment - Limited public-facing IPs but hundreds of internal servers that need to be accessed

We have a greenfield private cloud being set up in our single datacentre, which has internet connectivity with firewalls at the perimeter and F5 BIG-IP running GTM, LTM and APM services on a single cluster. We also have BIG-IQ. Virtual machines (VMs) will be created and destroyed in this cloud on demand by our users. Our cloud solution is based on VMware ESXi and NSX and is controlled through vRealize. When a VM is created, it is assigned IP-related information by an Infoblox DDI solution; Infoblox also operates as the DNS server for the environment. We only have 32 public IP addresses in our external pool, and this cannot grow. We will be setting up a new publicly visible subdomain for the cloud and will ensure that delegated authority to GTM is configured on the parent DNS.


What I need to happen is the following:


  1. When a VM is created, Infoblox assigns it a private IP from our huge range of RFC1918 addressing.
  2. When a VM is created, GTM must be aware of the VM host's existence and, if it is "up", respond to internet-sourced DNS requests for the host with a common IP address (an HTTP VIP) shared by all VMs, which we can then redirect on LTM to the correct private address.

This would address our inbound HTTP/S traffic requirement, but we're stumped by inbound traffic that can't be demultiplexed that way (like SSH or SFTP), since those protocols don't carry a hostname the proxy can route on. Perhaps there is something clever we can do with split-DNS iRules?
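
Something along these lines is what we had in mind for the split-DNS part, though we're not sure it's the right approach; the 10.0.0.0/8 test, the pool name and the VIP address below are all placeholders for our real values:

    when DNS_REQUEST {
        # Note: at GTM, IP::client_addr is the querying resolver (LDNS), not the
        # end client - our internal resolvers would source from RFC1918 space.
        if { [IP::addr [IP::client_addr] equals 10.0.0.0/8] } {
            # Internal query: answer from a (hypothetical) pool that returns
            # the VM's real private address.
            pool pool_internal_answers
        } else {
            # External query: answer with the shared public HTTP VIP
            # (203.0.113.10 is a placeholder from our public range).
            host 203.0.113.10
        }
    }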


Any help is appreciated. Thank you.


1 Reply

  • You are, I assume, using the Host: header for HTTP traffic to determine to which internal host the traffic should be directed. Is that correct? I further assume, then, that you are using SSL offloading on the BIG-IP (so that you may employ the Host: header for HTTPS traffic, as well). Is that also correct?
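
    If so, the LTM side might be expressed as something like this rough sketch, assuming a string-type Data Group (host_to_pool_dg here, a made-up name) that maps hostnames to pool names:

        when HTTP_REQUEST {
            # Look up the requested hostname in a string Data Group that maps
            # hostname -> pool name (host_to_pool_dg is a placeholder name).
            set target_pool [class match -value [string tolower [HTTP::host]] equals host_to_pool_dg]
            if { $target_pool ne "" } {
                pool $target_pool
            } else {
                # No mapping found; refuse rather than fall through to a default pool.
                HTTP::respond 404 content "Unknown host"
            }
        }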


    If that's the case then, for most protocols, there simply is no correlate. That is, the protocol embeds no host information that would allow many hosts to be virtualized behind a shared address. For those, you need some other, generally non-protocol-specific means of determining which internal host the client wishes to reach.


    If you can divide customers by source netblock, then you could employ an LTM iRule (rather than GTM/DNS, since for these protocols no host information is visible to the LTM) to direct traffic appropriately. You could, by this method, employ a Data Group that maps source netblock to customer, and another Data Group that maps customer to pool (or, I suppose, a single Data Group that maps source netblock directly to pool); the single-Data-Group variant is sketched below.
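
    As a rough sketch, assuming an address-type Data Group (netblock_to_pool_dg, a made-up name) whose keys are source netblocks and whose values are pool names:

        when CLIENT_ACCEPTED {
            # Look up the client's source address in an address-type Data Group
            # mapping netblock -> pool name (netblock_to_pool_dg is a placeholder).
            set target_pool [class match -value [IP::client_addr] equals netblock_to_pool_dg]
            if { $target_pool ne "" } {
                pool $target_pool
            } else {
                # Unknown source netblock: drop the connection rather than guess.
                reject
            }
        }

    You would attach something like that to a generic TCP virtual server on one of your public addresses; for address-type Data Groups, the equals match also covers subnet membership, so each entry can be a whole netblock.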


    Does this sound like the sort of thing that might work for your case?