Hello all,
I am trying to call user B with a tel: URI (tel:"number") from a registered
IMS user A.
I am seeing unexpected behavior: the proxy translates
tel:"number" into sip:"number"@mydomain.
I do not want this translation to be done on the proxy; the request should be
forwarded to the serving node as it is.
Is there any place where I should modify the P-CSCF configuration for this?
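For illustration, a rough sketch of what I mean (this assumes a plain
kamailio.cfg request route and a hypothetical RELAY route; the IMS modules may
do the rewriting elsewhere):

    # keep a tel: Request-URI untouched and relay it as received
    if ($ru =~ "^tel:") {
        # no rewriting into sip:"number"@mydomain here
        route(RELAY);   # hypothetical route that just calls t_relay()
        exit;
    }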
Regards
Hi,
this morning, two Kamailio servers suddenly stopped working after having
worked without a problem for months. Both are running 5.0.2 on Debian
Jessie. These systems only analyze mirrored traffic and write some
information to RabbitMQ.
I tried restarting, but after a few seconds they get stuck the same way as
before. I then attached gdb to some of the UDP listeners, and they all look
pretty much the same.
===========8< process 1 =================
(gdb) bt
#0 0x00007f404ca435b9 in syscall () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f4049f41260 in futex_get (lock=0x7f4041bccd00) at
../../core/parser/../mem/../futexlock.h:108
#2 0x00007f4049f477d7 in cfg_lock_helper (lkey=0x7fff3c048650, mode=0) at
cfgutils.c:667
#3 0x00007f4049f481ed in w_cfg_lock_wrapper (msg=0x7f404ba17f18,
key=0x7f404ba15488, mode=0) at cfgutils.c:712
#4 0x00007f4049f4823c in w_cfg_lock (msg=0x7f404ba17f18,
key=0x7f404ba15488 "\210a\241K@\177", s2=0x0) at cfgutils.c:717
#5 0x000000000045cd85 in do_action (h=0x7fff3c049410, a=0x7f404b9d5170,
msg=0x7f404ba17f18) at core/action.c:1060
#6 0x0000000000469afa in run_actions (h=0x7fff3c049410, a=0x7f404b9d5170,
msg=0x7f404ba17f18) at core/action.c:1552
#7 0x000000000045958d in do_action (h=0x7fff3c049410, a=0x7f404b9ce558,
msg=0x7f404ba17f18) at core/action.c:678
#8 0x0000000000469afa in run_actions (h=0x7fff3c049410, a=0x7f404b9cdf70,
msg=0x7f404ba17f18) at core/action.c:1552
#9 0x000000000046a2bd in run_top_route (a=0x7f404b9cdf70,
msg=0x7f404ba17f18, c=0x0) at core/action.c:1641
#10 0x0000000000580033 in receive_msg (
buf=0xa393f9 <buf+121> "REGISTER sip:domain SIP/2.0\r\nVia: SIP/2.0/UDP
1.2.3.4;branch=z9hG4bKfc98.c762d7151f110c7eb71fc7d4e0648f1f.0\r\nVia:
SIP/2.0/UDP 192.168.0.3:5060
;rport=38020;received=62.30.8.128;branch=z9hG4"...,
len=799, rcv_info=0x7fff3c049810) at core/receive.c:264
#11 0x00007f40483efcdf in parsing_hepv3_message (buf=0xa39380 <buf>
"HEP3\003\230", len=920) at hep.c:499
#12 0x00007f40483ee264 in hepv3_received (buf=0xa39380 <buf>
"HEP3\003\230", len=920, ri=0x7fff3c049a50) at hep.c:231
#13 0x00007f40483ec9cb in hep_msg_received (data=0x7fff3c049a30) at hep.c:85
#14 0x000000000049e4e1 in sr_event_exec (type=7, data=0x7fff3c049a30) at
core/events.c:263
#15 0x000000000048731a in udp_rcv_loop () at core/udp_server.c:466
#16 0x0000000000422d08 in main_loop () at main.c:1623
#17 0x000000000042a408 in main (argc=13, argv=0x7fff3c049ef8) at main.c:2643
(gdb) quit
===========8< process 1 =================
===========8< process 2 =================
(gdb) bt
#0 0x00007f404ca435b9 in syscall () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f4049f41260 in futex_get (lock=0x7f4041bccd00) at
../../core/parser/../mem/../futexlock.h:108
#2 0x00007f4049f477d7 in cfg_lock_helper (lkey=0x7fff3c048650, mode=0) at
cfgutils.c:667
#3 0x00007f4049f481ed in w_cfg_lock_wrapper (msg=0x7f404ba17f18,
key=0x7f404ba15488, mode=0) at cfgutils.c:712
#4 0x00007f4049f4823c in w_cfg_lock (msg=0x7f404ba17f18,
key=0x7f404ba15488 "\210a\241K@\177", s2=0x0) at cfgutils.c:717
#5 0x000000000045cd85 in do_action (h=0x7fff3c049600, a=0x7f404b9d5170,
msg=0x7f404ba17f18) at core/action.c:1060
#6 0x0000000000469afa in run_actions (h=0x7fff3c049600, a=0x7f404b9d5170,
msg=0x7f404ba17f18) at core/action.c:1552
#7 0x000000000045958d in do_action (h=0x7fff3c049600, a=0x7f404b9cf920,
msg=0x7f404ba17f18) at core/action.c:678
#8 0x0000000000469afa in run_actions (h=0x7fff3c049600, a=0x7f404b9ce9c0,
msg=0x7f404ba17f18) at core/action.c:1552
#9 0x000000000046a2bd in run_top_route (a=0x7f404b9ce9c0,
msg=0x7f404ba17f18, c=0x7fff3c049600) at core/action.c:1641
#10 0x0000000000580a69 in receive_msg (
buf=0xa393f9 <buf+121> "SIP/2.0 200 OK\r\nVia: SIP/2.0/UDP
172.20.40.8;branch=z9hG4bKa69c.5cca5a54c4d270c515eeb4bbb5e0bb44.0\r\nVia:
SIP/2.0/UDP 2.3.4.5:5060\r\nContact:
<sip:1234567@9.8.7.6:59669;transport=UDP>\r\nTo:
"..., len=506, rcv_info=0x7fff3c049810) at core/receive.c:327
#11 0x00007f40483efcdf in parsing_hepv3_message (buf=0xa39380 <buf>
"HEP3\002s", len=627) at hep.c:499
#12 0x00007f40483ee264 in hepv3_received (buf=0xa39380 <buf> "HEP3\002s",
len=627, ri=0x7fff3c049a50) at hep.c:231
#13 0x00007f40483ec9cb in hep_msg_received (data=0x7fff3c049a30) at hep.c:85
#14 0x000000000049e4e1 in sr_event_exec (type=7, data=0x7fff3c049a30) at
core/events.c:263
#15 0x000000000048731a in udp_rcv_loop () at core/udp_server.c:466
#16 0x0000000000422d08 in main_loop () at main.c:1623
#17 0x000000000042a408 in main (argc=13, argv=0x7fff3c049ef8) at main.c:2643
(gdb) quit
===========8< process 2 =================
It looks to me as if Kamailio gets stuck trying to acquire a lock for this
packet. The part of the config file that handles these packets looks like this:
request_route {
    route(initialize_variables);
    route(foo);
}

onreply_route {
    route(initialize_variables);
    route(foo);
}

route[initialize_variables] {
    $vn(bar) = $null;
    $vn(baz) = $null;
    $vn(barbaz) = $null;
    $vn(foobar) = $null;
}

route[foo] {
    lock("$ci");
    xlog("L_DBG", "Obtained lock, calling lua...\n");
    if (!lua_run("handle_packet")) {
        xlog("L_ERR", "SCRIPT: failed to execute lua function!\n");
    }

    if ($vn(bar) != $null) {
        route(mediaIpToRedis);
    }
    if ($vn(baz) != $null) {
        route(channelInfoToRedis);
    }
    if ($vn(barbaz) != $null) {
        route(sendToQueue);
    }

    xlog("L_DBG", "All finished, releasing lock...\n");
    unlock("$ci");
    xlog("L_DBG", "Released lock...\n");

    # update stats
    if ($vn(foobar) != $null) {
        update_stat("$vn(foobar)", "+1");
    }

    drop;
    exit;
}
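For reference, here is the critical section again with the way I read the
backtraces (the comments are my interpretation only, not a diagnosis):

    route[foo] {
        lock("$ci");    # cfgutils lock, hashed on the Call-ID; all workers
                        # that hash to the same lock serialize here
        if (!lua_run("handle_packet")) {
            xlog("L_ERR", "SCRIPT: failed to execute lua function!\n");
        }
        # ... sub-routes that inspect the $vn(...) variables ...
        unlock("$ci");  # must be reached on every path; if anything between
                        # lock() and unlock() ends script execution (exit/drop),
                        # the lock stays taken and the other workers block in
                        # futex_get(), which is where the backtraces sit
        drop;
        exit;
    }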
Is there a race condition I am missing? Until this morning it ran without
problems, and only about 4 packets every 2 minutes or so were sent to
RabbitMQ, so the system discards nearly 100% of the traffic (which is around
20 Mbit/s).
In the log file there are no errors at all, no aborted Lua scripts or anything
of the sort.
Does anybody have a hint for me? Thanks in advance.
Regards
Sebastian
Hello.
I installed Homer a couple of days ago on a new Debian 8 minimal install, with
HEP configured on Asterisk 14 on another machine. I installed Homer with the
quick-install script available on GitHub.
As the subject line states, no information is being stored in the daily
database tables being created. I have confirmed with tcpdump that the Asterisk
14 machine is sending out data on port 9060. On the Homer side, Kamailio is
confirmed to be listening on port 9060, and tcpdump confirms it is receiving
that Asterisk 14 SIP data on port 9060.
When starting up Kamailio, this is all that is listed in /var/log/messages
(even during calls):
Jan 12 11:31:06 Qbox-Homer homer[5815]: INFO: <core> [main.c:810]:
sig_usr(): signal 15 received
Jan 12 11:31:06 Qbox-Homer homer[5813]: INFO: <core> [main.c:810]:
sig_usr(): signal 15 received
Jan 12 11:31:06 Qbox-Homer homer[5811]: INFO: <core> [main.c:810]:
sig_usr(): signal 15 received
Jan 12 11:31:06 Qbox-Homer homer[5801]: INFO: <core> [sctp_core.c:53]:
sctp_core_destroy(): SCTP API not initialized
Jan 12 11:31:07 Qbox-Homer kamailio: INFO: <core> [sctp_core.c:75]:
sctp_core_check_support(): SCTP API not enabled - if you want to use it,
load sctp module
Jan 12 11:31:07 Qbox-Homer kamailio: WARNING: <core> [socket_info.c:1303]:
fix_hostname(): could not rev. resolve 0.0.0.0
Jan 12 11:31:07 Qbox-Homer kamailio: INFO: <core> [tcp_main.c:4665]:
init_tcp(): using epoll_lt as the io watch method (auto detected)
Jan 12 11:31:07 Qbox-Homer homer[20228]: INFO: sipcapture
[sipcapture.c:437]: parse_table_names(): INFO: table name:sip_capture
Jan 12 11:31:08 Qbox-Homer homer[20228]: INFO: <core> [udp_server.c:150]:
probe_max_receive_buffer(): SO_RCVBUF is initially 212992
Jan 12 11:31:08 Qbox-Homer homer[20228]: INFO: <core> [udp_server.c:200]:
probe_max_receive_buffer(): SO_RCVBUF is finally 425984
And here is Kamailio starting at debug level 2, as logged in /var/log/syslog:
Jan 12 11:44:29 Qbox-Homer systemd[1]: Starting Kamailio (OpenSER) - the
Open Source SIP Server...
Jan 12 11:44:29 Qbox-Homer kamailio: INFO: <core> [sctp_core.c:75]:
sctp_core_check_support(): SCTP API not enabled - if you want to use it,
load sctp module
Jan 12 11:44:29 Qbox-Homer kamailio: WARNING: <core> [socket_info.c:1303]:
fix_hostname(): could not rev. resolve 0.0.0.0
Jan 12 11:44:29 Qbox-Homer kamailio: INFO: <core> [tcp_main.c:4665]:
init_tcp(): using epoll_lt as the io watch method (auto detected)
Jan 12 11:44:29 Qbox-Homer kamailio[23757]: Listening on
Jan 12 11:44:29 Qbox-Homer kamailio[23757]: udp: 0.0.0.0:9060
Jan 12 11:44:29 Qbox-Homer kamailio[23757]: Aliases:
Jan 12 11:44:29 Qbox-Homer homer[23759]: INFO: sipcapture
[sipcapture.c:437]: parse_table_names(): INFO: table name:sip_capture
Jan 12 11:44:29 Qbox-Homer homer[23759]: INFO: <core> [udp_server.c:150]:
probe_max_receive_buffer(): SO_RCVBUF is initially 212992
Jan 12 11:44:29 Qbox-Homer homer[23759]: INFO: <core> [udp_server.c:200]:
probe_max_receive_buffer(): SO_RCVBUF is finally 425984
Jan 12 11:44:29 Qbox-Homer systemd[1]: Started Kamailio (OpenSER) - the Open
Source SIP Server.
No more messages appear during a call or after a call is terminated.
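For reference, the capture side of the stock Homer kamailio.cfg is, as far as
I understand it, roughly the following (a sketch only; the db_url credentials
are placeholders and the exact parameters may differ in the quick-install
version):

    loadmodule "sipcapture.so"

    # HEP listener that Asterisk sends to
    listen=udp:0.0.0.0:9060

    modparam("sipcapture", "db_url", "mysql://user:pass@localhost/homer_db")  # placeholder
    modparam("sipcapture", "capture_on", 1)
    modparam("sipcapture", "hep_capture_on", 1)
    modparam("sipcapture", "table_name", "sip_capture")

    request_route {
        # store every received message in the capture table
        sip_capture();
        drop;
    }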
Any more suggestions?
Thanks!
Is this possible? I'm trying with:
    if (auth_check("$fd", "subscriber", "1") == -2) {
        # log something
    }
I'm trying to find out whether the username actually exists in the DB but the
password provided was wrong...
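In other words, I am after something like the sketch below (assuming the
return code is available in $rc after the call; whether -2 really means
"user exists, wrong password" is exactly what I am asking):

    auth_check("$fd", "subscriber", "1");
    # $rc holds the return code of the last invoked function
    if ($rc == -2) {
        xlog("L_INFO", "user $au exists but supplied a wrong password\n");
    }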
Regards,
David Villasmil
email: david.villasmil.work(a)gmail.com
phone: +34669448337
Hi
I tried send_data; it transmits the data, but the socket connection remains
open.
Is it possible to just send?
I don't want the other end to have to close the connection or reply.
Thanks
Hi,
I have some kind of memory growth (leak?).
Kamailio 4.4.1.
I started with 64MB of shared memory and had a crash after 5 days of traffic.
I increased shmem to 128MB, but memory still grows every day.
When the traffic load decreases, the memory growth stops but memory stays at
the same level; when traffic increases again, used memory continues to grow.
I have now started Kamailio with the "-x qm" option to debug shmem, and every
half hour I dump its status.
There are several modules whose memory size is increasing (some of them are
obvious), but one is very strange: drouting.
I am using the drouting module for each call on my Kamailio.
The DB tables are very small, and there are no reloads.
It is used only once in my script:

    ...
    subst_user('/(.*)/$avp(xxx)/');
    if (!do_routing("$avp(yyyy)")) {
        xlog(...);   # log something
        return(-1);
    }
    ...
The shmem allocation that is rapidly growing and does not make sense is:
"from drouting: dr_time.c: ac_get_maxval(219)"
It seems that when I used mem_join=1 the growth was smaller, but still
significant; now I use mem_join=0 and it seems to increase rapidly...
I have more information from the logs and will send it if necessary (it is
just a lot...).
Any ideas?
cheers,
Uri
I'm trying to verify that the auth username begins with a given prefix, e.g.
"32". In my AUTH route I'm using:
    if (!($au =~ "32")) {
        # do something
    }
but that's not working, and I don't get why.
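For completeness, what I am aiming for is something like the sketch below (my
assumption being that the regex needs the "^" anchor to mean "begins with",
and that $au is already populated at that point):

    # match only auth usernames that start with "32"
    if (!($au =~ "^32")) {
        # do something
    }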
Regards,
David Villasmil
email: david.villasmil.work(a)gmail.com
phone: +34669448337
Hi.
In the part of kamailio.cfg below, why don't we just exit if
t_precheck_trans() returns true?
Why do we run t_check_trans(), although another process might already be
running the other t_check_trans()?
# handle retransmissions
if (!is_method("ACK")) {
    if (t_precheck_trans()) {
        t_check_trans();
        exit;
    }
    t_check_trans();
}
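For reference, my current reading of the snippet (the comments below are my
interpretation, not an authoritative explanation):

    # handle retransmissions
    if (!is_method("ACK")) {
        # t_precheck_trans() returns true when another process is currently
        # handling a transaction for the very same message
        if (t_precheck_trans()) {
            # if the message is a retransmission of a transaction that has
            # already been created, t_check_trans() absorbs it here;
            # otherwise we simply stop processing with exit
            t_check_trans();
            exit;
        }
        # no concurrent handling detected: this call catches plain
        # retransmissions of transactions that already exist
        t_check_trans();
    }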
Hi,
We use Kamailio as a router in our PBX server. The test below causes a
significant decrease of free shared memory and many "out of memory" errors in
the Kamailio log:
Preconditions: Kamailio server with 256MB of shared memory allocated.
Test actions: run two load-testing scripts at the same time:
1) 10000 register operations;
2) 20000 calls (about 300 per second).
Results:
1) memory consumption at peak values:
   - shmem.free_size: 1.8MB
   - core: 120MB
     - build_req_buf_from_sip_req: 12.5MB
     - msg_lump_cloner: 23.6MB
     - sip_msg_shm_clone: 84MB
   - tm module: 118MB
2) The shmem.free_size parameter goes back to normal after the test, so it can
be concluded that we don't have any memory leaks.
Is this normal behavior, or is it possible to do something to increase
performance?
PS: the same behaviour occurs for various amounts of shared memory defined at
the start of the Kamailio server (128MB, 256MB, 512MB, etc.).
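For reference, the numbers above can be sampled with the standard tools while
the test is running (a sketch; both commands come from the stock Kamailio
tooling):

    # overall shared-memory statistics (free_size, used_size, fragments, ...)
    kamctl stats shmem

    # tm keeps a shm clone of every request until the transaction completes,
    # so its counters are worth watching during the call burst
    kamcmd tm.stats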
Thank you,
Andrey