Hi, dear community,
I would appreciate your input on where to look further to explore my
options, and it would be good to learn from your experience with the following.
There are many active-active Kamailio nodes behind a TCP load balancer. On
the same VM as each Kamailio there is an instance of RTPEngine that manages
media/transcoding/etc. for that Kamailio. This scales up easily: new nodes
are just added behind the TCP load balancer. To be able to scale down and/or
update node configuration, I'd like to drain sessions/calls properly and
hand any active calls over from the current Kamailio and RTPEngine to other
nodes.
DMQ is not yet in place, although it's planned.
What would be the proper way of migrating RTP sessions to other RTPEngines?
Any caveats?
I also wonder how a TCP/TLS-based SIP session can properly be handed over to
another Kamailio.
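For the draining half, this is roughly the sequence I picture today (a sketch
only; the rtpengine.enable RPC is documented in the rtpengine module, while
the node URL and control port below are assumptions from a typical local setup):

# stop selecting the local RTPEngine for new sessions
kamcmd rtpengine.enable udp:127.0.0.1:2223 0

# watch existing sessions drain on the RTPEngine side
rtpengine-ctl list numsessions

That covers scale-down without migration; actually moving in-progress RTP
sessions between RTPEngine instances is the part I'm unsure about.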
Thank you in advance for your input.
Evgenii Buchnev
Hello,
it has been about 8 months since the last major release, v5.8.x, so it is
time to plan the next steps for getting v6.0.0 out.
The Kamailio Developers Meeting is scheduled in Düsseldorf, Germany,
during November 19-20, 2024, where we expect to push out a consistent
number of updates. Then we can wait maybe another 2 weeks or so to
settle down/finish those updates. After that, if no other new features
are planned by developers in the short term, we can enter the testing phase,
so the new major version can be released during the second half of
January 2025 or the first half of February 2025.
Therefore, if you have new features under development that might require
more than 1-1.5 months from now, and you want to get them into v6.0.0, let
us know so we can plan properly.
Other timeline suggestions for getting to v6.0.0 are also welcome!
Cheers,
Daniel
--
Daniel-Constantin Mierla (@ asipto.com)
twitter.com/miconda -- linkedin.com/in/miconda
Kamailio Consultancy, Training and Development Services -- asipto.com
Hi,
I was using Kamailio 5.2.5 and recently upgraded to Kamailio 5.5.7.
After the upgrade, the alias is no longer added to the ACK of a re-INVITE
from upstream. Before the upgrade it was working fine.
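For reference, by "alias" I mean the nathelper-style Contact alias handling,
roughly like this (a reconstruction, since I haven't pasted my config;
set_contact_alias() and handle_ruri_alias() are the stock nathelper functions):

loadmodule "nathelper.so"

# on the initial request from the NATed side:
set_contact_alias();    # appends ;alias=ip~port~transport to the Contact URI

# on in-dialog requests (including the ACK to a re-INVITE):
handle_ruri_alias();    # restores the real destination from the alias parameter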
I'd appreciate any help on this. Thank you.
Regards,
Miteshkumar Thakkar
Hi all!
I am having difficulties setting up SNMP, mostly in figuring out whether
what I have set up is configured correctly.
This is what I have on Kamailio (5.8.2), which is set up as a stateless
redirect server (simply put: it receives an INVITE from the SBC, gets a
route from routing services, modifies the Contact header, and replies to
the SBC with a 300 Multiple Choices carrying the new Contact header):
loadmodule "db_mysql.so"
loadmodule "db_cluster.so"
loadmodule "http_client.so"
loadmodule "jsonrpcs.so"
loadmodule "kex.so"
loadmodule "corex.so"
loadmodule "sl.so"
loadmodule "pv.so"
loadmodule "maxfwd.so"
loadmodule "textops.so"
loadmodule "xlog.so"
loadmodule "sanity.so"
loadmodule "jansson.so"
loadmodule "snmpstats.so"
loadmodule "file_out.so"
loadmodule "ctl.so"
loadmodule "permissions.so"
loadmodule "xhttp.so"
loadmodule "xhttp_rpc.so"
# ---- SNMP Stats params ----
modparam("snmpstats", "sipEntityType", "proxyServer")
modparam("snmpstats", "sipEntityType", "redirectServer")
modparam("snmpstats", "sipEntityType", "other")
modparam("snmpstats", "snmpgetPath", "/usr/bin/")
modparam("snmpstats", "MsgQueueMinorThreshold", 1)
modparam("snmpstats", "MsgQueueMajorThreshold", 1)
modparam("snmpstats", "dlg_minor_threshold", 1)
modparam("snmpstats", "dlg_major_threshold", 1)
modparam("snmpstats", "snmpCommunity", "kamailio")
I have set the *_minor_threshold values to the bare minimum. Should I add
anything else?
The logs show the following messages:
DEBUG: <core> [core/mem/q_malloc.c:402]: qm_malloc():
qm_malloc(0x7f03af1ff010, 60) called from snmpstats: snmp_statistics.c:
get_socket_list_from_proto_and_family(185)
DEBUG: <core> [core/mem/q_malloc.c:449]: qm_malloc():
qm_malloc(0x7f03af1ff010, 64) returns address 0x7f03af3bcc50 frag.
0x7f03af3bcc10 (size=64) on 1 -th hit
DEBUG: <core> [core/mem/q_malloc.c:402]: qm_malloc():
qm_malloc(0x7f03af1ff010, 20) called from snmpstats: snmp_statistics.c:
get_socket_list_from_proto_and_family(185)
DEBUG: <core> [core/mem/q_malloc.c:449]: qm_malloc():
qm_malloc(0x7f03af1ff010, 32) returns address 0x7f03af3bcd00 frag.
0x7f03af3bccc0 (size=32) on 1 -th hit
DEBUG: <core> [core/mem/q_malloc.c:515]: qm_free(): qm_free(0x7f03af1ff010,
0x7f03af3bcc50), called from snmpstats: snmp_statistics.c:
get_total_bytes_waiting(464)
DEBUG: <core> [core/mem/q_malloc.c:562]: qm_free(): freeing frag.
0x7f03af3bcc10 alloc'ed from snmpstats: snmp_statistics.c:
get_socket_list_from_proto_and_family(185)
DEBUG: <core> [core/mem/q_malloc.c:515]: qm_free(): qm_free(0x7f03af1ff010,
0x7f03af3bcd00), called from snmpstats: snmp_statistics.c:
get_total_bytes_waiting(471)
DEBUG: <core> [core/mem/q_malloc.c:562]: qm_free(): freeing frag.
0x7f03af3bccc0 alloc'ed from snmpstats: snmp_statistics.c:
get_socket_list_from_proto_and_family(185)
(the same qm_malloc()/qm_free() debug cycle repeats)
INFO: snmpstats [utilities.c:116]: get_statistic(): failed to retrieve
statistics for active_dialogs
The SNMP daemon is set up as follows:
$ cat /etc/snmp/snmptrapd.conf
# Example configuration file for snmptrapd
#
# No traps are handled by default, you must edit this file!
#
# authCommunity log,execute,net public
# traphandle SNMPv2-MIB::coldStart /usr/bin/bin/my_great_script cold
and in /etc/snmp/snmpd.conf:
###############################################################################
# Further Information
#
# See the snmpd.conf manual page, and the output of "snmpd -H".
agentaddress udp:161
rocommunity kamailio
master agentx
agentXSocket tcp:localhost:705
trap2sink 10.X.Y.Z netcool 162
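For reference, these are the sanity checks I would expect to pass with the
config above (community and agentx port taken from the snippets; querying on
localhost is an assumption):

# does snmpd itself answer with the "kamailio" community?
snmpwalk -v2c -c kamailio localhost 1.3.6.1.2.1.1

# is the agentx master socket on port 705 up, with kamailio connected to it?
ss -tnp | grep 705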
When running tcpdump and listening on port 162, no traffic is
captured... it seems to me that Kamailio is not sending anything to SNMP.
Any clue, anyone? What am I doing wrong?
Atenciosamente / Kind Regards / Cordialement / Un saludo,
*Sérgio Charrua*
The sdp_get() function from sdpops sometimes returns an extra blank line,
which looks like a bug to me. When the SDP is inside a MIME multipart
body, there's an extra newline at the end of the extracted SDP.
Here's a sample config. I use 5.8.1 for testing.
####################
#!KAMAILIO
listen=udp:203.0.113.57:5060
debug=1
loadmodule "sdpops"
loadmodule "pv"
loadmodule "xlog"
request_route { exit; }

reply_route {
    sdp_get("$avp(sdp)");
    xlog("L_NOTICE", "[$ci] The SDP is [$avp(sdp)].\n");
}
####################
Here are two sample SIP-message bodies.
####################
v=0
o=PCD 1730 1730 IN IP4 198.18.114.148
s=PCD
c=IN IP4 198.18.114.148
t=0 0
m=audio 29426 RTP/AVP 0
a=rtpmap:0 PCMU/8000
####################
####################
--e61e7a6a-16a2-46c0-adbb-a41ba86d32a2
Content-Type: application/sdp
Content-Length: 130

v=0
o=PCD 1730 1730 IN IP4 198.18.114.148
s=PCD
c=IN IP4 198.18.114.148
t=0 0
m=audio 29426 RTP/AVP 0
a=rtpmap:0 PCMU/8000
--e61e7a6a-16a2-46c0-adbb-a41ba86d32a2--
####################
When the first body is received, the xlog correctly ends with this:
m=audio 29426 RTP/AVP 0
a=rtpmap:0 PCMU/8000
] with length 130
When the second body is received, the xlog incorrectly ends with this,
which has an extra newline in it:
m=audio 29426 RTP/AVP 0
a=rtpmap:0 PCMU/8000
] with length 132
According to RFC 2046, "The CRLF preceding the boundary delimiter line
is conceptually attached to the boundary", but it looks like Kamailio
has a bug and processes this incorrectly.
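As a workaround until this is sorted out, the trailing newline can be
stripped in config with the standard {s.rtrim} string transformation (a
sketch based on my test config above):

reply_route {
    sdp_get("$avp(sdp)");
    # drop the trailing CRLF leaked from the MIME boundary handling
    $avp(sdp) = $(avp(sdp){s.rtrim});
    xlog("L_NOTICE", "[$ci] The SDP is [$avp(sdp)].\n");
}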
James
Hello,
In our system we've encountered the following situation.
There's a Kamailio proxy and a few REGISTERed devices behind it. When an INVITE is forked to these devices, two of them answer the call simultaneously. The Kamailio proxy accepts the first 200 OK from one of the devices and tries to CANCEL all the other branches. But the second device has already responded with its own 200 OK, so the CANCEL has no effect on that branch. As a result there is now one established dialog with media, and another "established" one that neither has media nor gets correctly terminated.
A similar scenario seems to be described in section 3.1.2 of RFC 5407 (Example Call Flows of Race Conditions in the Session Initiation Protocol (SIP)).
Is it possible to mitigate such scenarios in a Kamailio proxy? E.g., is it possible to send an additional BYE to these CANCELled branches?
In the current configuration, the Kamailio proxy retrieves contacts with registrar:lookup() and does parallel forking with tm:t_relay().
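One direction I considered (an assumption on my side, not something we run yet) is tracking dialogs with the dialog module, so that a leftover established branch can be torn down with a proxy-generated BYE:

loadmodule "dialog.so"

request_route {
    if (is_method("INVITE") && !has_totag()) {
        dlg_manage();    # keep dialog state so the proxy itself can send BYE
    }
    # existing registrar:lookup() and tm:t_relay() logic follows
}

and then ending the stray dialog over RPC, e.g. kamcmd dlg.terminate_dlg with the Call-ID, from-tag and to-tag. The part I don't see yet is how to detect the losing 2xx branch automatically.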
I gave APIBAN a try. After a few days, no requests had been banned by
APIBAN, but during the same period lots of requests were banned by other
checks in my config. That made me wonder whether APIBAN really is effective.
-- Juha
Hello,
This is Kamailio version 5.8.2 and we're having this issue every now and then. The error displayed is:
** ERROR: Error opening Kamailio's FIFO /run/kamailio/kamailio_rpc.fifo
** ERROR: Make sure you have loaded the jsonrpcs module and set FIFO transport parameters
The problem is that the FIFO file /run/kamailio/kamailio_rpc.fifo doesn't exist, even though the "jsonrpcs" module is in the Kamailio configuration, and the file is created when the Kamailio service is restarted. But, as mentioned, after a few days the issue reappears.
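For reference, this is how the FIFO transport is configured in our kamailio.cfg (fifo_name is the documented jsonrpcs parameter; the path matches the error message above):

loadmodule "jsonrpcs.so"
modparam("jsonrpcs", "fifo_name", "/run/kamailio/kamailio_rpc.fifo")

Since /run is typically a tmpfs, one thing I plan to check is whether something (e.g. a periodic tmpfiles cleanup) removes the FIFO while Kamailio keeps running.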
I'd appreciate any suggestion or advice on what I should check.
Thank you!