We found a memory leak in the AVP cleanup.
To reproduce, start the Kamailio 5.8 branch with the following config:
listen=127.0.0.1:5060
tcp_send_timeout=3
loadmodule "xlog.so"
loadmodule "kex.so"
loadmodule "pv.so"
loadmodule "ctl.so"
loadmodule "jsonrpcs.so"
loadmodule "tm.so"
loadmodule "siptrace.so"
modparam("siptrace", "trace_to_database", 0)
modparam("siptrace", "trace_on", 1)
modparam("siptrace", "trace_mode", 1)
modparam("siptrace", "trace_init_mode", 1)
loadmodule "permissions.so"
modparam("permissions", "address_file", "address.list")
modparam("permissions", "peer_tag_avp", "$avp(s:tag)")
modparam("permissions", "peer_tag_mode", 0)
modparam("permissions", "load_backends", 1)
modparam("permissions", "reload_delta", 1)
loadmodule "dispatcher.so"
modparam("dispatcher", "list_file", "/tmp/dispatcher.list")
modparam("dispatcher", "ds_ping_from", "sip:proxy@aggregator.nga911.com")
modparam("dispatcher", "ds_ping_interval", 3)
modparam("dispatcher", "ds_probing_mode", 1)
route {
    drop;
}

event_route[siptrace:msg] {
    if (allow_address("1", "$siptrace(src_hostip)", "0")) {
        xerr("SIP message from $siptrace(src_hostip)\n");
    }
}
You also need to create address.list with the following content:
# address file - records to match with allow_address(...) and variants
# * file format details
# - comments start with # and go to end of line
# - each line corresponds to a record with following attributes:
#
# (groupid,int) (address,str) (netmask,int,o), (port,int,o) (tag,str,o)
#
# * description of the tokens used to describe line format
# - int: expected integer value
# - str: expected string value
# - o: optional field
1 127.0.0.0 16 0 tag
and /tmp/dispatcher.list with the following content:
1 sip:sip.telnyx.com:5060;transport=tcp 8
Once Kamailio has started, run the following command:
kamcmd mod.stats core shm | grep avp
This will produce output like:
create_avp(178): 1360
init_avps(92): 16
init_avps(91): 16
Here the create_avp counter is constantly increasing.
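To make the growth easier to observe, a loop like the sketch below can sample the counters every few seconds (just a sketch; it assumes kamcmd is in the PATH and the ctl module is loaded, as in the config above):

# sample the AVP-related shm stats every 10 seconds
while true; do
    date '+%H:%M:%S'
    kamcmd mod.stats core shm | grep avp
    sleep 10
done

Because ds_ping_interval is 3, the dispatcher keeps sending OPTIONS pings, so the siptrace event route keeps firing and the create_avp value should keep growing between samples, matching the behaviour described above.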