Hello,
I’m using Kamailio 5.5.2 with UDP sockets, configured with 32 child processes.
During load tests, some packets seem to be dropped. I’m trying to figure out why, and how to solve (or mitigate) the issue.
From what I understand:
"the SIP messages send on UDP/SCTP are received directly from the buffer in kernel one by one, each being processed once read."
(this is a message from Daniel-Constantin Mierla, posted on the mailing list back in 2014 – I assume this is still accurate)
So Kamailio does not handle its own buffers for incoming messages, but simply pulls one message from the kernel queue whenever a process is available for processing.
(Correct me if this is not true)
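To make sure my mental model is right, here is a minimal sketch of how I picture the receive path (this is not Kamailio’s actual code, just an illustration; the port and buffer size are placeholders): each child blocks in recvfrom() on the shared UDP socket, so the kernel’s SO_RCVBUF is the only queue in front of the workers.

/* Minimal sketch of my mental model of a per-child UDP receive loop.
 * Not Kamailio's actual code; the port and buffer size are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    char buf[65535];                 /* one UDP datagram at a time in user space */

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5060);     /* placeholder SIP port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    for (;;) {
        /* While all children are busy, new datagrams pile up only in the
         * kernel socket buffer (SO_RCVBUF); once it is full, further
         * packets are dropped by the kernel. */
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
        if (n < 0) {
            perror("recvfrom");
            continue;
        }
        /* process the SIP message here, then loop back to recvfrom() */
    }
}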
Below are some system parameters that seem relevant to me (full values at the end of this message).
In particular, "net.core.rmem_default" seems very low (0.33 MB), while "net.core.rmem_max" is much higher (16 MB).
Looking at the code of the probe_max_receive_buffer() function (src/core/udp_server.c), it seems Kamailio tries to increase the buffer size from the default.
However, this does not seem to be working, judging by the following logs at startup (see the sketch just after the logs for how I understand the probing):
0(7602) INFO: <core> [core/udp_server.c:154]: probe_max_receive_buffer(): SO_RCVBUF is initially 349520
0(7602) INFO: <core> [core/udp_server.c:206]: probe_max_receive_buffer(): SO_RCVBUF is finally 349520
0(7602) INFO: <core> [core/udp_server.c:154]: probe_max_receive_buffer(): SO_RCVBUF is initially 349520
0(7602) INFO: <core> [core/udp_server.c:206]: probe_max_receive_buffer(): SO_RCVBUF is finally 349520
0(7602) INFO: <core> [core/udp_server.c:154]: probe_max_receive_buffer(): SO_RCVBUF is initially 349520
0(7602) INFO: <core> [core/udp_server.c:206]: probe_max_receive_buffer(): SO_RCVBUF is finally 349520
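For reference, here is a rough standalone sketch of what I understand the probing amounts to (not the actual probe_max_receive_buffer() code; the 16 MB target is simply my net.core.rmem_max): request a larger buffer with setsockopt(SO_RCVBUF), then read back with getsockopt() what the kernel actually granted.

/* Rough sketch of what I understand the SO_RCVBUF probing amounts to.
 * Not the Kamailio code; the 16 MB target is just my net.core.rmem_max. */
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int initial = 0, wanted = 16 * 1024 * 1024, final = 0;
    socklen_t len = sizeof(initial);

    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &initial, &len);
    printf("SO_RCVBUF is initially %d\n", initial);

    /* The kernel caps the requested value at net.core.rmem_max (and keeps
     * twice that internally) without returning an error, so setsockopt()
     * can "succeed" while granting less than requested. */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted)) < 0)
        perror("setsockopt SO_RCVBUF");

    len = sizeof(final);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &final, &len);
    printf("SO_RCVBUF is finally %d\n", final);

    close(fd);
    return 0;
}

If that is correct, the final value should be able to go up to net.core.rmem_max, which makes the logs above even more puzzling to me.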
Is there something I don’t understand? Could it be a bug?
One quick fix I can think of would be to increase "net.core.rmem_default". Considering that I don’t have many sockets (3 used by Kamailio), I think I could set it to 16 MB (my VMs have 8 GB of allocated RAM).
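Concretely, I have something like this in mind (the file name is just an example, and 16 MB is only my guess at a reasonable value for 3 sockets on an 8 GB VM):

# e.g. in /etc/sysctl.d/99-kamailio.conf
net.core.rmem_default = 16777216
net.core.rmem_max = 16777216
# apply with "sysctl --system" (or "sysctl -w" for a live test)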
What do you think?
Thanks in advance for your help.
My system parameters:
net.core.rmem_default = 349520
net.core.rmem_max = 16777216
# => 0.33 MB / 16 MB respectively
net.core.rmem_default, net.core.rmem_max – default and max socket receive buffer size in bytes. Each socket gets rmem_default receive buffer size by default, and can request up to rmem_max with the setsockopt option SO_RCVBUF.
net.ipv4.udp_mem = 188856 251809 377712
net.ipv4.udp_mem = "min pressure max" – these are numbers of pages (4 KB) available for all UDP sockets in the system. min, pressure and max control how memory is managed, but the main point is that max is the maximum size in pages for all UDP buffers in the system. These values are set at boot time (if the sysctls are not explicitly set) according to the available RAM size.
# 377712 x 4 KB => 1475 MB
net.ipv4.udp_rmem_min = 4096
net.ipv4.udp_rmem_min, net.ipv4.udp_wmem_min – minimal size for receive/send buffers (in bytes), guaranteed for each socket, even if the buffer size of all UDP sockets exceeds the pressure parameter in net.ipv4.udp_mem.
Regards,
Nicolas.