Hello,
First, it is better if you test with the latest version from branch 5.3, because 5.3.2 is already outdated in that series. Otherwise, we may be troubleshooting a side effect of an issue that has already been fixed.
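If you build from source, fetching and building the head of that branch is something like this (clone path and install prefix are just examples; the default prefix is /usr/local):

  git clone --depth 1 --branch 5.3 https://github.com/kamailio/kamailio.git
  cd kamailio
  make cfg           # generate the build configuration
  make all           # build core and modules
  sudo make install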
Second, the function that allocated most of the shm memory is related to transactions: it clones the SIP message in shared memory. Can you fetch the stats and see how many transactions are listed when you face that issue?
kamctl stats
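You can also restrict the output to the tm counters:

  kamctl stats tm

If tm:active_transactions or tm:inuse_transactions keeps growing while the traffic rate stays constant, transactions are accumulating instead of being freed.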
Cheers,
Daniel
Hello there.
During my stress tests against our Kamailio servers, I detected that one of them was running out of shm memory:
edge-sip-proxy[6119]: ERROR: ESP_LOG: 10405249-1580@10.225.121.206: <core> [core/mem/q_malloc.c:298]: qm_find_free(): qm_find_free(0x7ff370920000, 4224); Free fragment not found!
edge-sip-proxy[6119]: ERROR: ESP_LOG: 10405249-1580@10.225.121.206: <core> [core/mem/q_malloc.c:432]: qm_malloc(): qm_malloc(0x7ff370920000, 4224) called from core: core/sip_msg_clone.c: sip_msg_shm_clone(496), module: core; Free fragment not found!
edge-sip-proxy[6119]: ERROR: ESP_LOG: 10405249-1580@10.225.121.206: <core> [core/sip_msg_clone.c:499]: sip_msg_shm_clone(): could not allocate shared memory from shm pool
edge-sip-proxy[6119]: ERROR: ESP_LOG: 10405249-1580@10.225.121.206: tm [t_lookup.c:1293]: new_t(): out of mem:
edge-sip-proxy[6119]: ERROR: ESP_LOG: 10405249-1580@10.225.121.206: tm [t_lookup.c:1439]: t_newtran(): new_t failed
edge-sip-proxy[6119]: ERROR: ESP_LOG: 10405249-1580@10.225.121.206: sl [sl_funcs.c:392]: sl_reply_error(): stateless error reply used: I'm terribly sorry, server error occurred (1/SL)
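For reference, the size of the shm pool is fixed at startup via the -m command line option (value in MB); the total_size of 268435456 bytes reported below corresponds to 256 MB, i.e. a start roughly like this (the config path is just an example):

  kamailio -m 256 -f /etc/kamailio/kamailio.cfg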
The output of kamctl stats shmem confirms that Kamailio is running out of memory:

{
"jsonrpc": "2.0",
"result": [
"shmem:fragments = 36427",
"shmem:free_size = 14123552",
"shmem:max_used_size = 268435456",
"shmem:real_used_size = 254311904",
"shmem:total_size = 268435456",
"shmem:used_size = 215787904"
],
"id": 10901
}
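Note that free_size is still about 13.5 MB, but it is spread across 36427 fragments (roughly 388 bytes each on average), so an allocation of 4224 contiguous bytes can fail even while free memory remains. If fragmentation rather than a leak were the cause, the mem_join core parameter (applicable with the default q_malloc manager, if I am not mistaken) tells the allocator to merge adjacent free fragments:

  # kamailio.cfg, core section: merge adjacent free shm fragments
  mem_join=1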
Then I checked which module was consuming the most memory with kamcmd mod.stats all shm. Its output showed that the core module was consuming most of the shm memory; as you can see below, build_req_buf_from_sip_req accounts for most of it.

Module: core
{
sip_msg_shm_clone(496): 4872856
create_avp(175): 24768
msg_lump_cloner(986): 154376
xavp_new_value(106): 780896
build_req_buf_from_sip_req(2187): 183984624
counters_prefork_init(211): 36864
cfg_clone_str(130): 96
cfg_shmize(217): 848
main_loop(1303): 8
init_pt(106): 8
init_pt(105): 8
init_pt(104): 5256
register_timer(995): 232
cfg_register_ctx(47): 64
init_tcp(5021): 8192
init_tcp(5015): 32768
init_tcp(5007): 8
init_tcp(5000): 8
init_tcp(4993): 8
init_tcp(4987): 8
init_tcp(4975): 8
init_avps(90): 8
init_avps(89): 8
init_dst_blacklist(438): 16384
init_dst_blacklist(430): 8
timer_alloc(498): 96
init_dns_cache(361): 8
init_dns_cache(352): 16384
init_dns_cache(344): 16
init_dns_cache(336): 8
init_timer(267): 8
init_timer(266): 16384
init_timer(265): 8
init_timer(264): 8
init_timer(253): 8
init_timer(221): 8
init_timer(210): 278544
init_timer(209): 8
init_timer(197): 8
cfg_child_cb_new(829): 64
sr_cfg_init(361): 8
sr_cfg_init(354): 8
sr_cfg_init(347): 8
sr_cfg_init(335): 8
sr_cfg_init(323): 8
qm_shm_lock_init(1202): 8
Total: 190229920
}
Then I stopped the sipp instance that was running against Kamailio to see if the memory consumed by build_req_buf_from_sip_req would decrease, but it didn't, so it seems there is a memory leak in this version.

If the information I am sending in this email is not enough to identify the root cause, please let me know if there is anything else I can do to help pinpoint the main reason.

Thank you
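For completeness: since tm frees the cloned request only after the transaction's wait timer expires (5 seconds by default, if I recall correctly), the counters can be re-checked well after stopping the traffic, for example:

  sleep 60
  kamcmd mod.stats core shm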
Best Regards
--
Regards,
José Seabra
--
Daniel-Constantin Mierla -- www.asipto.com
www.twitter.com/miconda -- www.linkedin.com/in/miconda
Funding: https://www.paypal.me/dcmierla