Hello List.
I’m having the following issue. I have Kamailio v3.2 working with a Radius server as a backend. All the INVITEs coming to the Kamailio server are challenged against the Radius server, and the caller AVPs are then loaded from the Radius server as well. I’m seeing excessive delays when I get close to 10 calls per second, which I find very strange. I have benchmarked the AVP loading:
bm_start_timer("timer-tranum");
if (!radius_load_caller_avps("$fU@$fd")) {
    sl_send_reply("403", "Forbidden - Estamos experimentando problemas");
    exit;
} else {
    xlog("L_INFO", "[$ci]:[AUTH_REQUEST]: ok");
};
bm_log_timer("timer-tranum");
and this is the output:
Sep 12 18:26:07 benchmark (timer timer-tranum [1]): 112441 [ msgs/total/min/max/avg - LR: 100/18446744073695545751/23297/18446744073709236247/184467440736955456.000000 | GB: 2100/18446744073708471097/10746/18446744073709542058/8784163844623081.000000]
Sep 12 18:26:55 benchmark (timer timer-tranum [1]): 280543 [ msgs/total/min/max/avg - LR: 100/10238598/21317/18446744073708596212/102385.980000 | GB: 2200/9158079/10746/18446744073709542058/4162.763182]
Sep 12 18:28:20 benchmark (timer timer-tranum [1]): 312902 [ msgs/total/min/max/avg - LR: 100/5048656/15491/18446744073709546022/50486.560000 | GB: 2300/14206735/10746/18446744073709546022/6176.841304]
Sep 12 18:28:31 benchmark (timer timer-tranum [1]): 301818 [ msgs/total/min/max/avg - LR: 100/5622716/43785/18446744073708810522/56227.160000 | GB: 2400/19829451/10746/18446744073709546022/8262.271250]
Sep 12 18:28:44 benchmark (timer timer-tranum [1]): 18446744073708689124 [ msgs/total/min/max/avg - LR: 100/1329199/59661/18446744073709167366/13291.990000 | GB: 2500/21158650/10746/18446744073709546022/8463.460000]
Sep 12 18:29:26 benchmark (timer timer-tranum [1]): 44160 [ msgs/total/min/max/avg - LR: 100/18446744073701320457/19576/18446744073708922050/184467440737013216.000000 | GB: 2600/12927491/10746/18446744073709546022/4972.111923]
Sep 12 18:30:18 benchmark (timer timer-tranum [1]): 218573 [ msgs/total/min/max/avg - LR: 100/18446744073707306935/17459/18446744073708876520/184467440737073056.000000 | GB: 2700/10682810/10746/18446744073709546022/3956.596296]
Sep 12 18:31:06 benchmark (timer timer-tranum [1]): 132751 [ msgs/total/min/max/avg - LR: 100/9611955/17057/18446744073708726431/96119.550000 | GB: 2800/20294765/10746/18446744073709546022/7248.130357]
Sep 12 18:31:55 benchmark (timer timer-tranum [1]): 18859 [ msgs/total/min/max/avg - LR: 100/18446744073705270953/18859/18446744073708747515/184467440737052704.000000 | GB: 2900/16014102/10746/18446744073709546022/5522.104138]
Sep 12 18:32:48 benchmark (timer timer-tranum [1]): 61954 [ msgs/total/min/max/avg - LR: 100/7124234/18036/18446744073708596768/71242.340000 | GB: 3000/23138336/10746/18446744073709546022/7712.778667]
Sep 12 18:33:39 benchmark (timer timer-tranum [1]): 53489 [ msgs/total/min/max/avg - LR: 100/18446744073707180114/18177/18446744073708790040/184467440737071808.000000 | GB: 3100/20766834/10746/18446744073709546022/6698.978710]
Sep 12 18:34:35 benchmark (timer timer-tranum [1]): 60604 [ msgs/total/min/max/avg - LR: 100/18446744073701108277/33983/18446744073708839351/184467440737011072.000000 | GB: 3200/12323495/10746/18446744073709546022/3851.092187]
Sep 12 18:35:24 benchmark (timer timer-tranum [1]): 54705 [ msgs/total/min/max/avg - LR: 100/7185862/15880/18446744073708608536/71858.620000 | GB: 3300/19509357/10746/18446744073709546022/5911.926364]
Sep 12 18:36:21 benchmark (timer timer-tranum [1]): 60356 [ msgs/total/min/max/avg - LR: 100/18446744073705699677/19466/18446744073708844524/184467440737056992.000000 | GB: 3400/15657418/10746/18446744073709546022/4605.122941]
Sep 12 18:37:14 benchmark (timer timer-tranum [1]): 18446744073708679441 [ msgs/total/min/max/avg - LR: 100/3506212/16102/18446744073708791471/35062.120000 | GB: 3500/19163630/10746/18446744073709546022/5475.322857]
Sep 12 18:38:07 benchmark (timer timer-tranum [1]): 93933 [ msgs/total/min/max/avg - LR: 100/18446744073704680760/17569/18446744073708741284/184467440737046816.000000 | GB: 3600/14292774/10746/18446744073709546022/3970.215000]
Sep 12 18:38:58 benchmark (timer timer-tranum [1]): 142777 [ msgs/total/min/max/avg - LR: 100/681953/15933/18446744073708708694/6819.530000 | GB: 3700/14974727/10746/18446744073709546022/4047.223514]
Sep 12 18:39:44 benchmark (timer timer-tranum [1]): 165921 [ msgs/total/min/max/avg - LR: 100/5153736/15749/18446744073708657622/51537.360000 | GB: 3800/20128463/10746/18446744073709546022/5296.963947]
Sep 12 18:40:31 benchmark (timer timer-tranum [1]): 127615 [ msgs/total/min/max/avg - LR: 100/18446744073709161716/20690/18446744073708793559/184467440737091616.000000 | GB: 3900/19738563/10746/18446744073709546022/5061.170000]
Sep 12 18:41:21 benchmark (timer timer-tranum [1]): 47762 [ msgs/total/min/max/avg - LR: 100/9001175/18909/18446744073708632234/90011.750000 | GB: 4000/28739738/10746/18446744073709546022/7184.934500]
Sep 12 18:42:11 benchmark (timer timer-tranum [1]): 62944 [ msgs/total/min/max/avg - LR: 100/18446744073708163674/17934/18446744073708825788/184467440737081632.000000 | GB: 4100/27351796/10746/18446744073709546022/6671.169756]
If the output is in microseconds, then from time to time (for the last 100 messages) I get values like 184467440737081632.000000 microseconds.
Is this normal?
Then it goes back to a more reasonable value, e.g. 90011.750000 microseconds.
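(For context: 18446744073709551615 is 2^64 - 1, so values in that neighborhood are what an unsigned 64-bit "stop minus start" computation produces when the stop timestamp happens to be smaller than the start timestamp. The snippet below is only a minimal illustration of that wraparound, not the actual benchmark module code:)

/* illustration only: an unsigned 64-bit difference wraps around to
 * just below 2^64 when the second timestamp is smaller than the first */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint64_t start = 1000000;  /* hypothetical start time, microseconds */
    uint64_t stop  =  990000;  /* hypothetical stop time, 10 ms "earlier" */
    uint64_t diff  = stop - start;

    /* prints 18446744073709541616, i.e. 2^64 - 10000 */
    printf("diff = %" PRIu64 " usec\n", diff);
    return 0;
}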
I also took measurements on my Radius server, and it is answering with an average time of around 0.01 seconds.
So… what could the problem be? Could it be radiusclient-ng?
In an attempt to solve the issue I increased the number of children on the Kamailio server from 16 to 60, but the problem persists.
Can someone help me here?
Best Regards,
Ricardo Martinez.-
Hello.
Could it be that this number represents a failed connection to the Radius server?
Any help please?
Regards,
Ricardo.-
Hello,
On 9/13/12 9:22 PM, Ricardo Martinez wrote:
Hello.
Could it be that this number represents a failed connection to the Radius server?
I am not using radius myself, but I would try some generic troubleshooting:
- watch the network traffic to and from the radius server to see if there are delays between the requests and the responses
- if you also do user authentication with radius, do you notice the same problems?
- if you run with debug=3, can you spot the delays in the debug messages? Maybe you can patch the radius module to print more log messages around the code that sends the radius request and waits for the response.
Cheers, Daniel
Hi Daniel.
Thanks for your answer. Can you guide me on where to patch the radius module? I’ve been looking at the misc_radius module but I can’t find where to add more debug output.
Thanks again.
Ricardo.-
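For what it’s worth, here is a minimal sketch of the kind of instrumentation Daniel suggests, assuming the AVP loading ends up in a radiusclient-ng rc_auth() call inside the module; the wrapper function and variable names below are illustrative, not the actual misc_radius source:

/* Sketch only: time the blocking radiusclient-ng call and log it,
 * so slow RADIUS round-trips show up directly in the Kamailio log.
 * The wrapper name and buffer size here are hypothetical. */
#include <sys/time.h>
#include <radiusclient-ng.h>

#include "../../dprint.h"   /* LM_INFO() logging macro from the core */

static int timed_radius_request(rc_handle *rh, VALUE_PAIR *send,
        VALUE_PAIR **received)
{
    struct timeval tv_start, tv_end;
    char msg[4096];          /* reply message buffer for rc_auth() */
    long usec;
    int res;

    gettimeofday(&tv_start, NULL);
    res = rc_auth(rh, 0, send, received, msg);   /* blocking request */
    gettimeofday(&tv_end, NULL);

    usec = (tv_end.tv_sec - tv_start.tv_sec) * 1000000L
            + (tv_end.tv_usec - tv_start.tv_usec);
    LM_INFO("rc_auth() returned %d after %ld usec\n", res, usec);

    return res;
}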
if you are using the freeradius server, then check that your sql module configuration has enough sql connections to the server:
# number of sql connections to make to server
num_sql_socks = xx
-- juha
Hi Juha. I'm using a different Radius server (Radiator) with an Oracle database. Is there a way to add debug information in radiusclient-ng to see if this is the bottleneck?
Thanks. Ricardo.-
Ricardo Martinez writes:
I'm using a different Radius server (Radiator) with an Oracle database. Is there a way to add debug information in radiusclient-ng to see if this is the bottleneck?
i don't know about debugging radiusclient-ng, but you can check with wireshark how long it takes the server to reply to the queries and whether radiusclient needs to resend them. if there are no resends and the replies all come back fast, then the problem may be in kamailio, but i doubt it, since i used radius for authentication and other things for many years without any performance problems.
-- juha
in radiusclient.conf, use ip addresses instead of host names as authserver and acctserver values.
-- juha
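To illustrate Juha's suggestion, a radiusclient.conf excerpt might then look like this (the addresses are placeholders, not values from this thread):

# radiusclient.conf (excerpt): literal IP addresses, so no host name
# resolution is involved when the client talks to the servers
authserver      192.0.2.10
acctserver      192.0.2.10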