Single Node Persistence

Problem this snippet solves:

A really slick & reliable way to stick to one and only one server in a pool.

Requirement: Direct traffic to only a single node in a pool at a time. Initially, traffic should always go to node A. If Node A fails, then traffic will go to Node B. When Node A comes back online, traffic should continue to go to Node B. When Node B fails, then the traffic should go to Node A.

To send traffic to only 1 pool member at a time, you can use an iRule and Universal Persistence to set a single persistence record that applies to all connections.

  1. Create a virtual server.
  2. Create a pool with the real servers in it.
  3. Create an iRule like the one shown after this list.
  4. Create a Persistence profile of type Universal which uses the iRule you just created. Set the timeout high enough so it will never expire under typical traffic conditions.
  5. In the virtual server definition, apply pool as the default pool, and the new persistence profile as the default persistence profile (both on the virtual server "resources" screen).
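
The iRule for step 3 is just a one-line persist command in the CLIENT_ACCEPTED event (enter only this body in the GUI; the outer "rule" wrapper line and its closing brace are added automatically):

    when CLIENT_ACCEPTED {
        persist uie 1
    }

For steps 4 and 5, a minimal tmsh sketch (the names single_node_persist, single_node_univ, and my_vs are hypothetical examples):

    # Step 4: universal persistence profile that runs the iRule above;
    # 3600s is an arbitrary example timeout - set it high enough that it
    # never expires under typical traffic.
    tmsh create ltm persistence universal single_node_univ defaults-from universal rule single_node_persist timeout 3600
    # Step 5: make it the default persistence profile on the virtual server.
    tmsh modify ltm virtual my_vs persist replace-all-with { single_node_univ { default yes } }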

The first connection will create a single universal persistence record with a key of "1". All subsequent connections will look up persistence using "1" as the key, resulting in truly universal persistence for all connections. (You can use 1 or any other constant value; 0 has the same effect as 1. One of my customers uses "persist uie [TCP::local_port]" instead.)
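
A sketch of that variant, assuming the intent is to key the single record on the virtual server's port (so that several virtuals sharing the iRule don't all share the key "1"):

    when CLIENT_ACCEPTED {
        # In CLIENT_ACCEPTED, TCP::local_port is the client-side local port,
        # i.e. the port of the virtual server that accepted the connection.
        persist uie [TCP::local_port]
    }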

When one node fails, the other is persisted to by all comers. When the 2nd node fails, the 1st again becomes the preferred node for all, ad infinitum.

This approach doesn't offer manual resume after failure, or a true designation of "primary" and "secondary" instances (sometimes required for database applications), but it does solve the problem of "only use one node at a time, I don't care which one, please." (You can use priority to gravitate towards the top of a list...)

Note: Priority-based load balancing, with or without dynamic persistence, doesn't quite address this requirement. Priority load balancing allows you to set a preferred server to which traffic should return once it recovers. With Priority and dynamic persistence of any kind enabled, when a higher-priority node comes back up after failing, you will see traffic distributed across multiple pool members until old connections/sessions die off. With Priority and no persistence, existing sessions will break once the preferred node becomes available again.
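
For reference, the priority-group setup this note rules out would look something like the following tmsh sketch (pool name, member addresses, and priority values are hypothetical):

    # Members with a higher priority-group value are preferred; with
    # min-active-members 1, traffic moves to the lower-priority member only
    # when the higher one is down, and moves back when it recovers - the
    # fail-back behavior the requirement forbids.
    tmsh modify ltm pool my_pool min-active-members 1 members modify { 10.0.0.1:80 { priority-group 10 } 10.0.0.2:80 { priority-group 5 } }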

Comments on this Snippet
Comment made 13-May-2015 by AgungDes 0
Hi guys, I tried using this code for the iRule but it still shows an error. The error message is: 01070151:3: Rule [/Common/single_node] error: /Common/single_node:1: error: [undefined procedure: rule][rule PriorityFailover { when CLIENT_ACCEPTED { persist uie 1 } }]
Comment made 06-Jul-2015 by slouma 0
I have the same problem. Please help if somebody has an idea.
Comment made 09-Jul-2015 by CharlesCS 643
Only specify the when CLIENT_ACCEPTED { persist uie 1 } part when creating the iRule; the outer "rule" line and its closing brace are created by the GUI.
Comment made 30-Jan-2016 by Adam Ali 0
Is this solution applicable to BIG-IP v10.x?
Comment made 08-Jun-2016 by lenny19 0
very good stuff, worked a treat
Comment made 26-Aug-2016 by JTB 0

This iRule just made my day! Thanks!

Comment made 28-Jan-2017 by Stanislas Piron 10464


Another solution is to use a destination address persistence profile.

As the destination (IP address of the virtual server) is always the same, all requests will use only one pool member.

Comment made 04-Sep-2017 by Michael Gilin

Hi, using the "persist uie 1" iRule is not recommended, since under certain conditions, when a chosen pool member/node goes down, it may lead to inconsistent persistence entries between TMMs (i.e. different TMMs may end up with persistence entries pointing to different nodes).

If you need to persist to a single pool member/node, use a destination address persistence profile.
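
A minimal tmsh sketch of that setup (profile and virtual server names are hypothetical):

    # Destination address affinity profile with a long example timeout.
    tmsh create ltm persistence dest-addr single_node_dst defaults-from dest_addr timeout 3600
    # Use it as the default persistence profile on the virtual server.
    tmsh modify ltm virtual my_vs persist replace-all-with { single_node_dst { default yes } }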

Comment made 17-Oct-2017 by jdeeby 56

I am getting this error when writing the rule.

1: error: [undefined procedure: persist uie 1 ][{ persist uie 1 }]

Here is the syntax:

    when CLIENT_ACCEPTED { { persist uie 1 } }

Comment made 17-Oct-2017 by Stanislas Piron 10464

@jdeeby why didn't you copy / paste the code?

provided code :

    persist uie 1

your code:

    { persist uie 1 }

or as I and Michael commented, use destination address persistence

Comment made 07-Jul-2018 by Daniel Gonzalez Garcia 0

Hi Stanislas, Michael

I understand that with a destination address persistence profile you'll need to take care of the load balancing method to have the requests going to the same node, especially new requests.

I cannot see how it will work out for new requests reaching the LTM that are not yet in the persistence table, in the event one of the nodes comes back online.

Reading the original post, it is required that in the event of a node coming back online, traffic keeps going to the same node.


Comment made 23-Aug-2018 by mderanek 0

We have been using this for years with no problems. It's based on the VS name. Using uie 1 is not a good idea, especially if you are using the iRule for multiple VSs.

We create a universal persistence profile that calls the irule instead of assigning the irule to the virtual server.

    when CLIENT_ACCEPTED { persist uie [virtual name] }

Comment made 07-Sep-2018 by Stephan Manthey 3803

Just use destination address affinity instead, please. It results in a single persistence record applicable to all clients requesting the virtual. The record actually contains the virtual server's IP address (destination address affinity) and will be deleted/replaced in case the mapped pool member fails and a re-selection happens. Finally, all traffic sticks to a single pool member as long as it is available. If it fails, the persistence record will be replaced with the next incoming connection. This is an alternative to using priority groups, which may tend to flap between pool members in case the high-priority member is not stable. Cheers, Stephan

Comment made 3 months ago by k20 66

OK people, I still don't know how exactly dst addr persistence or the iRule will help. Here's what I believe both methods will fail to deliver.

Scenario 1:

T=0, nodes A and B are both online (where T = time), and the persistence table is empty. T=1, PC1 and PC2 start to connect. One of the four mappings below could happen as persistence entries are created:

  1. PC1-A and PC2-B
  2. PC1-B and PC2-A
  3. PC1-A and PC2-A
  4. PC1-B and PC2-B

As you can see, we don't want 1 or 2 to happen, and neither dst addr persistence nor the iRule can help when persistence entries don't exist yet.

Scenario 2:

T=0, node A online and node B offline, and persistence table is empty. T=1, PC1 and PC2 start to connect. The following mappings will be created in the persistence table:

PC1-A and PC2-A

T=3, node B comes online, PC1, PC2 and PC3 connect. The following mappings could happen:

PC1-A, PC2-A, and PC3-A or PC3-B

As you can see, PC1 and PC2 don't change. However, the new PC3 could go to node A or node B. This will result in some new PCs going to A or B while the existing PCs still stick to A, because their persistence entries already exist.

Before dst addr persistence or the iRule even kicks in, we have to make sure that only ONE node is taking the traffic. How can we accomplish that? Only once this first step is accomplished will dst addr persistence or the iRule help.

Comment made 3 months ago by Stephan Manthey 3803

Hi k20, destination address affinity does not care about your client. The only thing of interest is the destination IP your clients are targeting, and this will be the IP address of your virtual server. Whenever a client establishes a connection, the virtual will create a persistence table entry containing the virtual server's IP as key and the pool member as value. With a new incoming connection (within the persistence timeout) it will look up the table. The key is the virtual's IP address, and the value is exactly the same pool member. And this results in selecting the same pool member for all clients. Cheers, Stephan
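
To see this in practice, the persistence table can be inspected from the CLI; with destination address affinity there should be a single record keyed on the virtual server's IP (output format varies by version):

    # List the current persistence records on the BIG-IP.
    tmsh show ltm persistence persist-records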

Comment made 3 months ago by k20 66

Stephan, I understand how persistence works. Did you forget that at T=0 there is nothing in the persistence table and both nodes are online? What decision will dst addr persistence make to ensure that all clients get sent to a single node and not to both? You can't make a persistence decision yet because the record doesn't exist.

Comment made 3 months ago by Stephan Manthey 3803

That's true. If there is no persistence record, the pool's load balancing method will pick a member. The persistence record will then be created, and every new connection to this virtual will now be balanced to the same pool member. We are not talking about prioritizing a specific pool member.

Comment made 3 months ago by k20 66

So my question remains, how do we make sure that all connections will get sent to a single node when both nodes are up and no persistence records exist yet for those connections?

Does dynamic priority for pool members exist? So far, I have only heard about static priority using Priority Group Activation.

Comment made 3 months ago by Stanislas Piron 10464

Destination address affinity will create one single persistence record: the virtual server address.

Destination address affinity uses the client-side destination address... which is the virtual server address for a standard VS.

So the first connection will select a pool member, and all other connections will use the same pool member.

Comment made 3 months ago by k20 66

I'm sorry, you are confusing me. When you say "Destination address affinity uses the client-side destination address... which is the virtual server address for a standard VS", are you talking about the connection between the F5 and the real servers? If so, it doesn't apply to my environment because we are using SNAT, so the servers in our case can only see the SNAT address, NOT the virtual address.

Comment made 3 months ago by Stanislas Piron 10464

Client side connection is the connection between client and f5!!!

Comment made 1 week ago by Dominique Petitpierre 10

Could someone be more explicit about how to configure "destination address affinity" as suggested by Stephan Manthey? It is not clear to me whether one should

  • just replace persist uie 1 by persist dest_addr in the iRule code example above?
  • or configure "Destination address affinity" persistence on the Default Persistence Profile of the virtual server?
  • or still something else?

Also, in a case where the connections are in principle permanent (e.g. to a database master node):

  • should the timeout be unset (Indefinite)?
  • In case a server node is temporarily inaccessible or administratively forced offline, the TCP connections to that node might survive, but during that time new connections could be established with other nodes, resulting, once the node is accessible again, in a state where there are active connections to more than one node. How can this be avoided? E.g. how can all connections to the other nodes be cut when a new server node is chosen by the persistence?

  • Would a custom destination address affinity persistence profile with a CARP hash algorithm work like it does for source address persistence, i.e. always select the same server node when all nodes are available, even for the very first connection (e.g. after a reboot)? Cf. "How Carp algorithm with source address persistence works?"

Thanks in advance for your explanations!

Comment made 1 week ago by Stephan Manthey 3803

Hi Dominique, I would recommend creating a new dest address affinity persistence profile with an appropriate timeout. It will be used as the default persistence profile and replaces the iRule logic. Please use "action on service down" in the advanced pool config with the parameter "reject" to terminate current connections when a pool member's state changes. With a state change the persistence record will be deleted, and a new incoming connection will be balanced to another available pool member. A new persistence record will be created. Please keep in mind that the record will be updated by new incoming connections only. That's why you will notice a reset of the remaining time only with newly established connections. Cheers, Stephan
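
A tmsh sketch of that pool setting (the pool name is hypothetical, and it is assumed here that the GUI option "Reject" corresponds to the reset value of service-down-action; verify against your version):

    # Terminate existing connections to a pool member when a monitor marks
    # it down, instead of letting them linger ("Reject" in the GUI).
    tmsh modify ltm pool my_pool service-down-action reset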