Hello,
I'm running kamailio:4.2:df86f2a9a09339687af5914b85fe8bd8f8f1f575 and am
getting a periodic crash, once every few days, in response to a CANCEL
message.
The basic backtrace is as follows:
(gdb) where
#0 0x000000000044fed2 in del_nonshm_lump (lump_list=0x7f48f5681440)
at data_lump.c:677
#1 0x00007f48f5404a15 in free_faked_req (faked_req=0x7f48f5680ea0,
t=0x7f46ee6faeb0) at t_reply.c:975
#2 0x00007f48f5405bdf in run_failure_handlers (t=0x7f46ee6faeb0,
rpl=0xffffffffffffffff, code=487, extra_flags=0) at t_reply.c:1061
#3 0x00007f48f54084e4 in t_should_relay_response (Trans=0x7f46ee6faeb0,
new_code=487, branch=0, should_store=0x7fffce5605e0,
should_relay=0x7fffce5605e4, cancel_data=0x7fffce5606b0,
reply=0xffffffffffffffff) at t_reply.c:1406
#4 0x00007f48f540b045 in relay_reply (t=0x7f46ee6faeb0,
p_msg=0xffffffffffffffff, branch=0, msg_status=487,
cancel_data=0x7fffce5606b0, do_put_on_wait=1) at t_reply.c:1809
#5 0x00007f48f5386832 in cancel_branch (t=0x7f46ee6faeb0, branch=0,
reason=0x0, flags=10) at t_cancel.c:276
#6 0x00007f48f53aff4a in e2e_cancel (cancel_msg=0x7f48f68d69d8,
t_cancel=0x7f46ee8d9c30, t_invite=0x7f46ee6faeb0) at t_fwd.c:1373
#7 0x00007f48f53b4bd0 in t_relay_cancel (p_msg=0x7f48f68d69d8) at
t_fwd.c:1967
#8 0x00007f48f53deaa7 in w_t_relay_cancel (p_msg=0x7f48f68d69d8, _foo=0x0,
_bar=0x0) at tm.c:1743
#9 0x000000000041d364 in do_action (h=0x7fffce560fc0, a=0x7f48f6689f70,
msg=0x7f48f68d69d8) at action.c:1088
#10 0x0000000000429a7a in run_actions (h=0x7fffce560fc0, a=0x7f48f6689f70,
msg=0x7f48f68d69d8) at action.c:1583
#11 0x000000000042a0df in run_actions_safe (h=0x7fffce5622b0,
a=0x7f48f6689f70, msg=0x7f48f68d69d8) at action.c:1648
#12 0x0000000000541158 in rval_get_int (h=0x7fffce5622b0,
msg=0x7f48f68d69d8,
i=0x7fffce561498, rv=0x7f48f668a1e0, cache=0x0) at rvalue.c:924
#13 0x0000000000545390 in rval_expr_eval_int (h=0x7fffce5622b0,
msg=0x7f48f68d69d8, res=0x7fffce561498, rve=0x7f48f668a1d8)
at rvalue.c:1918
#14 0x0000000000545786 in rval_expr_eval_int (h=0x7fffce5622b0,
msg=0x7f48f68d69d8, res=0x7fffce561920, rve=0x7f48f668a948)
at rvalue.c:1926
#15 0x000000000041ce4e in do_action (h=0x7fffce5622b0, a=0x7f48f668c260,
msg=0x7f48f68d69d8) at action.c:1064
#16 0x0000000000429a7a in run_actions (h=0x7fffce5622b0, a=0x7f48f6689808,
msg=0x7f48f68d69d8) at action.c:1583
#17 0x000000000041d2cd in do_action (h=0x7fffce5622b0, a=0x7f48f668c960,
msg=0x7f48f68d69d8) at action.c:1079
#18 0x0000000000429a7a in run_actions (h=0x7fffce5622b0, a=0x7f48f667c628,
msg=0x7f48f68d69d8) at action.c:1583
#19 0x000000000042a1a7 in run_top_route (a=0x7f48f667c628,
msg=0x7f48f68d69d8,
c=0x0) at action.c:1669
#20 0x0000000000507e1a in receive_msg (
buf=0xa6f760 "CANCEL sip:yyy@xxx:5060 SIP/2.0\r\nVia: SIP
/2.0/UDP xxx:5060;branch=z9hG4bK08f.3dc6f0e1.0\r\nFrom: \"yyy\"
<sip:yyy@xxx>;tag=D78eD8FB3SDgc\r\nCall-ID:
e5e48a99-48dd-1233-96b7-782bcb13da6a\r\nTo: <sip:xxx@xxx:5060>\r\nCSeq:
73049624 CANCEL\r\nMax-Forwards: 32\r\nUser-Agent: OpenSIPS (1.9.1-notls
(x86_64/linux))\r\nContent-Length: 0\r\n\r\n", len=394,
rcv_info=0x7fffce5625a0)
at receive.c:216
#21 0x00000000006074ae in udp_rcv_loop () at udp_server.c:521
#22 0x00000000004a5f0b in main_loop () at main.c:1629
#23 0x00000000004ab8bf in main (argc=11, argv=0x7fffce5629c8) at main.c:2578
I'll send a 'thread apply all bt full' privately due to the amount of
private addresses in there, but a quick glance suggests a possible
problem is here:
#5 0x00007f48f5386832 in cancel_branch (t=0x7f46ee6faeb0, branch=0,
reason=0x0, flags=10) at t_cancel.c:276
cancel = 0x1 <Address 0x1 out of bounds>
len = 32584
crb = 0x7f46ee6fb0b0
irb = 0x7f46ee6fb028
ret = 1
tmp_cd = {cancel_bitmap = 0, reason = {cause = 0, u = {text = {
s = 0x0, len = 0}, e2e_cancel = 0x0, packed_hdrs = {s =
0x0,
len = 0}}}}
pcbuf = 0x7f46ee6fb0c0
__FUNCTION__ = "cancel_branch"
#6 0x00007f48f53aff4a in e2e_cancel (cancel_msg=0x7f48f68d69d8,
t_cancel=0x7f46ee8d9c30, t_invite=0x7f46ee6faeb0) at t_fwd.c:1373
cancel_bm = 1
reason = 0x0
free_reason = 0
i = 0
lowest_error = 0
ret = 32584
tmcb = {req = 0x137f66ce710, rpl = 0x7f48f68d69d8, param =
0xf48ab828,
code = -158504488, flags = 32584, branch = 0,
t_rbuf = 0xf80f668f9a0, dst = 0xce5622b0, send_buf = {
s = 0x1ffffffff <Address 0x1ffffffff out of bounds>,
len = -304107664}}
__FUNCTION__ = "e2e_cancel"
#7 0x00007f48f53b4bd0 in t_relay_cancel (p_msg=0x7f48f68d69d8) at
t_fwd.c:1967
t_invite = 0x7f46ee6faeb0
t = 0x7f46ee8d9c30
ret = -323705680
new_tran = 1
Thanks,
-- Alex
--
Alex Balashov - Principal
Evariste Systems LLC
303 Perimeter Center North
Suite 300
Atlanta, GA 30346
United States
Tel: +1-678-954-0670
Web: http://www.evaristesys.com/
The nathelper function set_contact_alias adds brackets around the contact's URI if the original one did not have them. This results in an invalid SIP URI in the contact structure, which causes problems for other modules' functions when they try to parse the contact URI (e.g. save_pending() from ims_registrar_usrloc). This simple patch fixes the issue by simply moving the URI's boundaries.
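To illustrate the idea in a standalone way -- this is a simplified
sketch, not the literal diff of this patch, and the alias value below
is a made-up example:

/* Simplified sketch of the boundary fix: when nathelper wraps a bare
 * contact URI in angle brackets, the uri kept in the contact structure
 * must point inside the brackets, not include them. */
#include <stdio.h>
#include <string.h>

typedef struct { char *s; int len; } str;

int main(void)
{
    /* buffer after set_contact_alias added the enclosing brackets */
    char buf[] = "<sip:alice@10.0.0.1:5060;alias=192.0.2.1~5060~1>";
    str uri;

    /* broken: uri spans the brackets -> not a valid SIP URI */
    uri.s = buf;
    uri.len = (int)strlen(buf);
    printf("before: %.*s\n", uri.len, uri.s);

    /* fixed: move the uri's boundaries past the enclosing '<' and '>' */
    uri.s += 1;
    uri.len -= 2;
    printf("after:  %.*s\n", uri.len, uri.s);

    return 0;
}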
You can view, comment on, or merge this pull request online at:
https://github.com/kamailio/kamailio/pull/154
-- Commit Summary --
* modules/nathelper: don't include enclosing bracket in contact uri in set_contact_alias
-- File Changes --
M modules/nathelper/nathelper.c (4)
-- Patch Links --
https://github.com/kamailio/kamailio/pull/154.patch
https://github.com/kamailio/kamailio/pull/154.diff
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/pull/154
Hi guys,
I found a very useful function in the `sdpops` module that I want to use
within my own module, but I cannot figure out how to forward-declare it
correctly so that I can use it in my module.
Since `sdpops` does not declare its functions in a .h file, I am simply
forward-declaring the function I want to use in my own .h file, like this:
int sdp_remove_codecs_by_name(sip_msg_t* msg, str* codecs);
Then I just call it within my function. It compiles and the linker gives
no errors, but when I run Kamailio it reports an error in the config
file, which is obviously not true, because if I comment out the code
that uses the forward-declared function, Kamailio runs without errors.
Kamailio log (when using the forward declaration of the sdpops function):
0(23096) DEBUG: <core> [route_struct.c:129]: mk_action(): ACTION_#2 #0/2:
3(3)/ 0x1
0(23096) DEBUG: <core> [route_struct.c:129]: mk_action(): ACTION_#2 #1/2:
3(3)/ 0x1
0(23096) DEBUG: <core> [route_struct.c:129]: mk_action(): ACTION_#16 #0/3:
22(16)/ 0x7f115fcfa898
0(23096) DEBUG: <core> [route_struct.c:129]: mk_action(): ACTION_#16 #1/3:
8(8)/ 0x7f115fcfaf98
0(23096) DEBUG: <core> [route_struct.c:129]: mk_action(): ACTION_#16 #2/3:
0(0)/ 0x7f1100000000
0(23096) DEBUG: <core> [route.c:129]: route_add(): mapping routing block
(0xa84a40)[MANAGE_FAILURE] to 1
ERROR: bad config file (1 errors)
0(23096) DEBUG: <core> [ppcfg.c:224]: pp_ifdef_level_check(): same number
of pairing preprocessor directives #!IF[N]DEF - #!ENDIF
0(23096) DEBUG: tm [t_funcs.c:85]: tm_shutdown(): DEBUG: tm_shutdown :
start
0(23096) DEBUG: tm [t_funcs.c:88]: tm_shutdown(): DEBUG: tm_shutdown :
emptying hash table
0(23096) DEBUG: tm [t_funcs.c:90]: tm_shutdown(): DEBUG: tm_shutdown :
removing semaphores
0(23096) DEBUG: tm [t_funcs.c:92]: tm_shutdown(): DEBUG: tm_shutdown :
destroying tmcb lists
0(23096) DEBUG: tm [t_funcs.c:95]: tm_shutdown(): DEBUG: tm_shutdown : done
0(23096) DEBUG: <core> [mem/shm_mem.c:242]: shm_mem_destroy():
shm_mem_destroy
0(23096) DEBUG: <core> [mem/shm_mem.c:245]: shm_mem_destroy(): destroying
the shared memory lock
Has anybody resolved this issue? All help is appreciated.
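For reference, what I expected to need is the usual Kamailio
inter-module API pattern, roughly like the sketch below. Note that
bind_sdpops and sdpops_api_t are names I am assuming for illustration;
sdpops may not export such a binding at all:

/* Sketch of the usual Kamailio inter-module binding pattern; the
 * include paths assume the consumer module lives under modules/. */
#include "../../sr_module.h"         /* find_export() */
#include "../../dprint.h"            /* LM_ERR() */
#include "../../str.h"               /* str */
#include "../../parser/msg_parser.h" /* sip_msg_t */

typedef int (*sdp_remove_codecs_by_name_f)(sip_msg_t *msg, str *codecs);

typedef struct sdpops_api {
    sdp_remove_codecs_by_name_f sdp_remove_codecs_by_name;
} sdpops_api_t;

/* hypothetical bind function the sdpops module would have to export */
typedef int (*bind_sdpops_f)(sdpops_api_t *api);

/* call this from the consumer module's mod_init() */
static inline int sdpops_load_api(sdpops_api_t *api)
{
    bind_sdpops_f bind_f;

    bind_f = (bind_sdpops_f)find_export("bind_sdpops", 0, 0);
    if (bind_f == NULL) {
        LM_ERR("cannot find bind_sdpops - is sdpops loaded?\n");
        return -1;
    }
    return bind_f(api);
}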
Hello,
so far nathelper has been using the id column (auto-incremented for
mysql) from the location database table to decide which records will be
used for sending keepalive requests on a timer execution.
Practically, it was like:
id % max_partitions = current_iteration
max_partitions is computed from the number of nat ping processes and the
nat ping interval.
We changed the way we mark the records for sending keepalives, in order
to have proper indexing -- by adding the keepalive column.
Initially I thought it could be used for keeping the partition index as
well, but there is one thing that prevents it: pinging all contacts.
There is an option to ping a contact no matter whether it is behind NAT
or not. In that case, the NATted state is no longer related to the
keepalive flag, and the keepalive column cannot be used.
The solution is to add a dedicated column for the partition index. I
have already implemented it in a separate branch, named
tmp/usrloc-partion. The new code is rather small, because it is only
about inserting into the database; the value is not used when records
are kept in memory, so it does not need to be loaded back into memory
or updated. But it changes the database structure, by adding a new
column to the location table.
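Roughly, the idea is like this -- a simplified standalone sketch, not
the exact code from the branch, and deriving max_partitions as the
product of the two settings is just an illustration of "computed from
the number of nat ping processes and the nat ping interval":

/* Simplified sketch of the partition assignment at insert time. */
#include <stdio.h>

static int natping_processes = 4;  /* example config values */
static int natping_interval = 30;

/* value stored in the new partition column when the record is inserted */
static int assign_partition(unsigned int insert_seq)
{
    unsigned int max_partitions =
            (unsigned int)(natping_processes * natping_interval);
    return (int)(insert_seq % max_partitions);
}

int main(void)
{
    /* each timer iteration can then do an indexed, keyed lookup
     * (partition = current_iteration) instead of the raw
     * "id % max_partitions" query */
    for (unsigned int seq = 0; seq < 5; seq++)
        printf("record %u -> partition %d\n", seq, assign_partition(seq));
    return 0;
}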
I think this is a fix and still OK to apply in this testing phase.
There is no new feature compared with the previous stable; it just
restores the same functionality. However, I wanted to check with all
the devs and hear other opinions. And again, it only concerns
db_mode=DB_ONLY for usrloc.
The alternative would be to add back the selection based on modulo over
the id column (which used a raw query), making it work only with mysql
and postgres (as in the old versions); moreover, it is not suitable for
indexing.
If nobody has anything against it, I will merge it.
Cheers,
Daniel
--
Daniel-Constantin Mierla
http://twitter.com/#!/miconda - http://www.linkedin.com/in/miconda
Kamailio World Conference, May 27-29, 2015
Berlin, Germany - http://www.kamailioworld.com
The 4.3 rtpengine module has a new force_send_interface parameter. I
noticed it because the rtpengine module generated several messages of
this kind at startup:
May 8 11:06:43 siika /usr/bin/sip-proxy[26572]: INFO: rtpengine [rtpengine.c:455]: bind_force_send_ip(): force_send_ip_str not specified in .cfg file!
Two questions:
1) Is an info-level message really necessary? I would prefer a
debug-level message.
2) Since there can be several (sets of) rtpengines running on the same
host as the SIP proxy, on a private network, and on the internet, how
can a single force_send_interface parameter be enough?
-- juha