I didn’t know about that parameter… thanks for your help :)
Regards,
Nicolas.
From: Daniel-Constantin Mierla <miconda@gmail.com>
Sent: Tuesday, 23 November 2021 17:37
Hello,
Kamailio tries to increase it up to 256 kB if it is lower; if it is higher, it probably stays the same. You can try setting the -b CLI parameter or the maxbuffer core parameter to higher values.
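For reference, a sketch of the two options mentioned above (16777216 is only an example value; it should not exceed net.core.rmem_max):

```
# Command line: -b sets the maximum receive buffer size, in bytes
kamailio -b 16777216 -f /etc/kamailio/kamailio.cfg

# Or as a core parameter in kamailio.cfg:
maxbuffer=16777216
```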
Cheers,
Daniel
On 23.11.21 16:10, Chaigneau, Nicolas wrote:
Hello,
I’m using Kamailio 5.5.2 with UDP sockets, configured with 32 child processes.
During load tests, some packets seem to be dropped. I’m trying to figure out why, and how to solve (or mitigate) the issue.
From what I understand:
« the SIP messages send on UDP/SCTP are received directly from the buffer in kernel one by one, each being processed once read. »
(this is a message from Daniel-Constantin Mierla, posted on the mailing list back in 2014 – I assume this is still accurate)
So Kamailio does not handle its own buffers for incoming messages, but simply pulls one message from the kernel queue whenever a process is available for processing.
(Correct me if this is not true)
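That kernel-buffer model can be demonstrated outside Kamailio: if nobody reads the socket while a burst arrives, the kernel silently drops whatever does not fit in SO_RCVBUF. A minimal Python sketch of this (Linux behavior assumed; exact counts will vary):

```python
import socket

# Receiver with a deliberately tiny kernel buffer; nobody reads while the
# burst arrives, mimicking all worker processes being busy.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)  # kernel may round this up
rx.bind(("127.0.0.1", 0))
rx.settimeout(0.1)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(100):
    tx.sendto(b"x" * 1024, rx.getsockname())

# Now drain the queue: only the datagrams that fit in the buffer survive.
received = 0
try:
    while True:
        rx.recvfrom(2048)
        received += 1
except socket.timeout:
    pass

print(f"received {received} of 100 datagrams")  # the rest were dropped by the kernel
```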
Below are some system parameters that seem relevant to me.
In particular, « net.core.rmem_default » seems very low (0.33 MB), while « net.core.rmem_max » is much higher (16 MB).
Looking at the code of the function probe_max_receive_buffer (src/core/udp_server.c), it seems Kamailio tries to increase the buffer size from the default.
However, this does not seem to be working, judging by the following logs at startup:
0(7602) INFO: <core> [core/udp_server.c:154]: probe_max_receive_buffer(): SO_RCVBUF is initially 349520
0(7602) INFO: <core> [core/udp_server.c:206]: probe_max_receive_buffer(): SO_RCVBUF is finally 349520
0(7602) INFO: <core> [core/udp_server.c:154]: probe_max_receive_buffer(): SO_RCVBUF is initially 349520
0(7602) INFO: <core> [core/udp_server.c:206]: probe_max_receive_buffer(): SO_RCVBUF is finally 349520
0(7602) INFO: <core> [core/udp_server.c:154]: probe_max_receive_buffer(): SO_RCVBUF is initially 349520
0(7602) INFO: <core> [core/udp_server.c:206]: probe_max_receive_buffer(): SO_RCVBUF is finally 349520
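For comparison, the same probing can be done with a plain socket. On Linux, an unprivileged setsockopt(SO_RCVBUF) request is capped at net.core.rmem_max, and the kernel doubles the granted value to account for bookkeeping overhead; getsockopt() shows the result. A small Python sketch (the numbers depend on your sysctls):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
initial = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("initial SO_RCVBUF:", initial)  # should match net.core.rmem_default

# Ask for 16 MB; without CAP_NET_ADMIN the kernel caps the request at
# net.core.rmem_max, then doubles it for bookkeeping overhead.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16 * 1024 * 1024)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("granted SO_RCVBUF:", granted)
```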
Is there something I don’t understand? Could it be a bug?
One quick fix I can think of would be to increase « net.core.rmem_default ». Considering that I don’t have many sockets (3 used by Kamailio), I think I could set it to 16 MB (my VMs have 8 GB of allocated RAM).
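If you go that route, the change could look like this (example file name and values; putting them in /etc/sysctl.d makes them persistent across reboots, and sysctl --system applies them immediately):

```
# /etc/sysctl.d/99-udp-buffers.conf  (example values)
net.core.rmem_default = 16777216
net.core.rmem_max = 16777216
```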
What do you think ?
Thanks in advance for your help.
My system parameters:
net.core.rmem_default = 349520
net.core.rmem_max = 16777216
# => 0.33 MB / 16 MB respectively
1. net.core.rmem_default, net.core.rmem_max – default and maximum socket receive buffer size, in bytes. Each socket gets rmem_default receive buffer size by default, and can request up to rmem_max with the setsockopt option SO_RCVBUF.
net.ipv4.udp_mem = 188856 251809 377712
1. net.ipv4.udp_mem = "min pressure max" – these are numbers of PAGES (4 KB) available for all UDP sockets in the system. min, pressure and max control how memory is managed, but the main point is that max is the maximum size, in PAGES, for all UDP buffers in the system. These values are set at boot time (if the sysctls are not explicitly set) according to the available RAM size.
# 377712 x 4 KB => 1475 MB
net.ipv4.udp_rmem_min = 4096
1. net.ipv4.udp_rmem_min, net.ipv4.udp_wmem_min – minimal size for receive/send buffers (in bytes), guaranteed for each socket, even if the buffer size of all UDP sockets exceeds the pressure parameter in net.ipv4.udp_mem.
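To check all of these values at once, they can be read straight from /proc/sys (Linux-only sketch; standard paths assumed):

```python
from pathlib import Path

# Each sysctl name maps to a file under /proc/sys, with dots replaced by slashes.
for name in ("net/core/rmem_default", "net/core/rmem_max",
             "net/ipv4/udp_mem", "net/ipv4/udp_rmem_min"):
    value = Path("/proc/sys", name).read_text().strip()
    print(f"{name.replace('/', '.')} = {value}")
```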
Regards,
Nicolas.