Persistence, RADIUS servers, and V10 vs. V11
I am seeing some weirdness in the configurations we have for load balancing RADIUS servers at two of our sites. Both sites appear to be working from the clients' perspective. One site is running 10.2.2 HF1 on a 6900 (campus1), and the other is running 11.4.0 HF3 on a VIPRION guest (campus2). Campus1 runs without issue; in campus2, my /var/log/ltm file fills up constantly with messages like these:
Jul 16 12:54:42 slot2/myltm err tmm[10569]: 01220001:3: TCL error: /Common/radius_persist_simple - Out of bounds (line 1) (line 1) invoked from within "RADIUS::avp 31 "string""
Jul 16 12:54:42 slot2/myltm err tmm[10569]: 01220001:3: TCL error: /Common/radius_persist_simple - Buffer error (line 1) (line 1) invoked from within "RADIUS::avp 31 "string""
The configurations are pretty close other than the version numbers. Here is just the port 1812 config on campus1 (no errors):
ltm virtual VS_campus1_ise-1812 {
destination 1.1.1.1:radius
ip-protocol udp
mask 255.255.255.255
persist {
radius_universal {
default yes
}
}
pool pool_ise_udp-1812
profiles {
ise_radius { }
udp-ise { }
}
}
ltm profile radius ise_radius {
defaults-from radiusLB
}
ltm profile udp udp-ise {
datagram-load-balancing enabled
defaults-from udp
}
ltm persistence universal radius_universal {
defaults-from universal
match-across-services enabled
rule radius_persist_simple
timeout 3600
}
ltm rule radius_persist_simple {
when CLIENT_ACCEPTED {
set calling_station_id [RADIUS::avp 31 "string"]
persist uie "$calling_station_id"
}
}
ltm pool pool_ise_udp-1812 {
members {
1.1.2.1:radius {
session monitor-enabled
}
1.1.2.2:radius {
session monitor-enabled
}
1.1.2.3:radius {
session monitor-enabled
}
}
monitor min 1 of { radius-udp-ise }
slow-ramp-time 60
}
Here is just the port 1812 config on campus2 (many errors):
ltm virtual VS_campus2-ise-1812 {
description "RADIUS auth ise"
destination 3.1.1.1:radius
ip-protocol udp
mask 255.255.255.255
persist {
radius_universal {
default yes
}
}
pool pool_ise_udp-1812
profiles {
radius-ise { }
udp-ise { }
}
source 0.0.0.0/0
vs-index 3
}
ltm profile radius radius-ise {
app-service none
defaults-from radiusLB
}
ltm profile udp udp-ise {
app-service none
datagram-load-balancing enabled
defaults-from udp
}
ltm persistence universal radius_universal {
app-service none
defaults-from universal
match-across-services enabled
rule radius_persist_simple
timeout 3600
}
ltm rule radius_persist_simple {
when CLIENT_ACCEPTED {
set calling_station_id [RADIUS::avp 31 "string"]
persist uie "$calling_station_id"
}
}
ltm pool pool_ise_udp-1812 {
members {
server1:radius {
address 3.1.1.2
session monitor-enabled
state up
}
server2:radius {
address 3.1.1.3
session monitor-enabled
state up
}
server3:radius {
address 3.1.1.4
session monitor-enabled
state up
}
server4:radius {
address 3.1.1.5
session monitor-enabled
state up
}
server5:radius {
address 3.1.1.6
session monitor-enabled
state up
}
server6:radius {
address 3.1.1.7
session monitor-enabled
state up
}
}
monitor min 1 of { radius-udp }
slow-ramp-time 60
}
The odd thing is that if I put an iRule directly on the virtual server in campus2 (a separate iRule, not the one associated with the persistence profile), and that iRule invokes [RADIUS::avp 31 "string"], I do not get the errors in /var/log/ltm. They only appear from the iRule tied to the persistence profile. Should I remove the iRule from my persistence profile in the campus2 configuration and associate it directly with the virtual server? Is there any known difference between 10.2.2 and 11.4.0 with regard to universal persistence handling?
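In case it helps frame the question, here is a defensive variant of the rule I am considering as a stopgap. It wraps the AVP lookup in a TCL catch so a missing or malformed attribute 31 (Calling-Station-Id) skips persistence for that datagram instead of throwing an error; this is untested on 11.4.0, and it only suppresses the symptom rather than explaining the version difference:
ltm rule radius_persist_guarded {
when CLIENT_ACCEPTED {
    # catch stores either the AVP value or the error text in
    # calling_station_id; a non-zero return code means the lookup failed
    if { [catch { RADIUS::avp 31 "string" } calling_station_id] == 0
         && $calling_station_id ne "" } {
        persist uie $calling_station_id
    }
}
}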
Thanks, Jen