I finally got everything working, but I seem to consistently receive the following 3 errors (see below) over and over while a call is being captured.
Versions:
Homer 3.6
kamailio 4.3.0-dev4
Captagent 4.2.0
Any assistance would be appreciated.
Errors:
Mar 8 22:04:47 ce-homer2 /usr/local/kamailio/sbin/kamailio[9921]: ERROR: <core> [parser/parse_fline.c:257]: parse_first_line(): parse_first_line: bad message (offset: 0)
Mar 8 22:04:47 ce-homer2 /usr/local/kamailio/sbin/kamailio[9921]: ERROR: <core> [parser/msg_parser.c:688]: parse_msg(): ERROR: parse_msg: message=<HEP3#004>>
Mar 8 22:04:47 ce-homer2 /usr/local/kamailio/sbin/kamailio[9921]: ERROR: <core> [receive.c:129]: receive_msg(): core parsing of SIP message failed (127.0.0.1:33264/1)
Mar 8 22:04:47 ce-homer2 /usr/local/kamailio/sbin/kamailio[9922]: ERROR: <core> [parser/parse_fline.c:257]: parse_first_line(): parse_first_line: bad message (offset: 0)
Mar 8 22:04:47 ce-homer2 /usr/local/kamailio/sbin/kamailio[9922]: ERROR: <core> [parser/msg_parser.c:688]: parse_msg(): ERROR: parse_msg: message=<HEP3#002Ü>
Mar 8 22:04:47 ce-homer2 /usr/local/kamailio/sbin/kamailio[9922]: ERROR: <core> [receive.c:129]: receive_msg(): core parsing of SIP message failed (127.0.0.1:33264/1)
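For reference, this is roughly what a minimal sipcapture listener for HEP traffic looks like on the kamailio side; a sketch only, not the reporter's configuration, and the listen port 9060 and the db_url value are assumptions:
```
#!KAMAILIO
# minimal sketch: dedicate a socket to HEP traffic from captagent and let
# sipcapture decode and store it, instead of handing the raw "HEP3..."
# payload to the regular SIP request parser
listen=udp:127.0.0.1:9060

loadmodule "sipcapture.so"
modparam("sipcapture", "capture_on", 1)
modparam("sipcapture", "hep_capture_on", 1)
modparam("sipcapture", "db_url", "mysql://homer_user:homer_password@localhost/homer_data")

request_route {
    # store everything arriving on this capture instance and stop
    sip_capture();
    exit;
}
```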
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/107
Hello. I need parallel forking of calls to the same username (for example, a call to all contacts registered as User123). My endpoints may be WebSocket based as well as standard UDP endpoints, and I use rtpengine_manage to handle media for both webphones and standard soft/hard phones.
I fetch all contacts manually and then, in the branch route, set the rtpengine_manage options for every call.
It works fine, but only with a single kamailio server.
When I use 2 kamailio servers as load balancers, the server handling the call gets all endpoints from location, but the call actually goes out only to the contacts that registered on that same server.
For example, I call user123 and have 3 contacts:
user123(a)1.2.3.4 - registered at kamailio 1
user123(a)3.2.1.4 - registered at kamailio 2
user123(a)4.3.2.1 - registered at kamailio 1
So if the call goes through kamailio 1, it only calls user123(a)1.2.3.4 and user123(a)4.3.2.1.
I use these settings for usrloc on both kamailios to share the location table between the 2 servers:
modparam("usrloc", "db_url", DBURL)
modparam("usrloc", "db_mode", 3)
modparam("usrloc", "user_column", "username")
modparam("usrloc", "contact_column", "contact")
modparam("usrloc", "expires_column", "expires")
modparam("usrloc", "q_column", "q")
modparam("usrloc", "callid_column", "callid")
modparam("usrloc", "cseq_column", "cseq")
modparam("usrloc", "methods_column", "methods")
modparam("usrloc", "cflags_column", "cflags")
modparam("usrloc", "user_agent_column", "user_agent")
modparam("usrloc", "received_column", "received")
modparam("usrloc", "socket_column", "socket")
modparam("usrloc", "path_column", "path")
modparam("usrloc", "ruid_column", "ruid")
modparam("usrloc", "instance_column", "instance")
modparam("usrloc", "use_domain", 1)
and this code for calling them
route[GET_CONTACTS]
{
sql_query("ca", "select contact from location where username='$tU'", "ra");
xlog("rows: $dbr(ra=>rows) cols: $dbr(ra=>cols)\n");
if($dbr(ra=>rows)>0){
$var(i)=0;
while($var(i)<$dbr(ra=>rows)){
xlog("L_INFO","SQL query return contact {$dbr(ra=>[$var(i),0])} for {$tU} at step {$var(i)}\n");
if ($dbr(ra=>[$var(i),0])=~"transport=ws"){
xlog("L_INFO", "This is a Websocket call to endpoint");
sql_pvquery("ca", "select received from location where contact='$dbr(ra=>[$var(i),0])'","$var(recieved)");
$du=$var(recieved);
xlog("L_INFO","SQL query return recieved {$var(recieved)} for {$tU}. Destination is {$du}\n");
append_branch("sip:$tU@$(du{s.select,1,:})");
}
else
{
xlog("L_INFO", "This is a classic UDP call to endpoint");
$var(recieved)="";
sql_pvquery("ca", "select received from location where contact='$dbr(ra=>[$var(i),0])'","$var(recieved)");
xlog("L_INFO", "SQL query return RECIEVED {$var(recieved)}");
if ($var(recieved)==0){
xlog("L_INFO", "Recieved string is EMPTY");
$du="sip:"+$(dbr(ra=>[$var(i),0]){s.select,1,@});
}
else {
xlog("L_INFO", "Recieved string is {$var(recieved)}");
$du=$var(recieved);
}
$var(UDP_contact)="sip:"+$(dbr(ra=>[$var(i),0]){s.select,1,@});
append_branch("sip:$tU@$(du{s.select,1,:})");
xlog("L_INFO","Classic Destination URI is {$dbr(ra=>[$var(i),0])} for {$tU}}. Destination is {$du}\n");
}
$var(i) = $var(i) + 1;
}
}
t_on_branch("1");
return;
}
}
}
branch_route[1]{
if($du=~"transport=ws"){
xlog("L_INFO","Websocket Branch is {$du} for {$tU}\n");
rtpengine_manage("internal extenal force trust-address replace-origin replace-session-connection ICE=force RTP/SAVPF");
t_on_reply("REPLY_FROM_WS");
}
else{
xlog("L_INFO","UDP Branch is {$du)} for {$tU}\n");
rtpengine_manage("replace-origin replace-session-connection ICE=remove RTP/AVP");
t_on_reply("MANAGE_CLASSIC_REPLY");
}
}
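For comparison, a minimal sketch of the lookup()-based approach (the route and branch-route names here are made up, and the registrar, sl, tm and rtpengine modules are assumed to be loaded): with db_mode=3 both proxies read the same location table, so lookup() adds every contact as a branch, and the transport of each branch can be checked in the branch route via $ru, which is one way to pick rtpengine_manage flags per fork, a question raised below.
```
route[GET_CONTACTS_LOOKUP] {
    # with db_mode=3, lookup() reads the shared location table and adds
    # one branch per contact, regardless of which proxy stored it
    if (!lookup("location")) {
        sl_send_reply("404", "Not Found");
        exit;
    }
    t_on_branch("PER_BRANCH_RTP");
    if (!t_relay()) {
        sl_reply_error();
    }
}

branch_route[PER_BRANCH_RTP] {
    # choose rtpengine flags per branch based on the branch request URI
    if ($ru =~ "transport=ws") {
        rtpengine_manage("replace-origin replace-session-connection ICE=force RTP/SAVPF");
        t_on_reply("REPLY_FROM_WS");
    } else {
        rtpengine_manage("replace-origin replace-session-connection ICE=remove RTP/AVP");
        t_on_reply("MANAGE_CLASSIC_REPLY");
    }
}
```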
When it tries to branch to an endpoint that has no registration on the server handling the call, I get errors saying the tm module cannot build the Via header:
*via_builder(): TCP/TLS connection (id: 0) for WebSocket could not be found*
*ERROR: <core> [msg_translator.c:1725]: build_req_buf_from_sip_req(): could not create Via header*
*ERROR: <core> [forward.c:607]: forward_request(): ERROR: forward_request: building failed*
UDP calls produce similar errors (sorry, I cannot share the exact output; this situation does not happen often).
So I think I run into this because I build the calls manually, and I tried to switch to the lookup_branches function instead, but I have no idea how to set the rtpengine_manage parameters for each endpoint depending on whether it is a WebSocket or a standard call.
If the real problem is calling through 2 kamailios acting as load balancers, please let me know how to set the rtpengine_manage parameters for the endpoints of every fork. If not, can you tell me how I can call all endpoints regardless of which server (kamailio 1 or 2) they registered on?
On the other hand, I cannot understand why kamailio 2 does not see the registrations made on kamailio 1 (or why kamailio 1 does not see those made on 2). Maybe this is a problem in the usrloc module; that is why I am reporting it here.
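One possible direction, sketched under assumptions that are not in the report (this node listening on 10.0.0.1, the peer kamailio reachable at 10.0.0.2): since a WebSocket contact can only be delivered over the TCP/WS connection held by the node it registered on, a branch whose location.socket belongs to the other node can be relayed through that node instead of being sent directly. The fragment below would sit inside the contact loop of the GET_CONTACTS route above.
```
# sketch only: addresses 10.0.0.1 (local) and 10.0.0.2 (peer) are assumptions
$var(regsock) = "";
sql_pvquery("ca",
    "select socket from location where contact='$dbr(ra=>[$var(i),0])'",
    "$var(regsock)");
if ($var(regsock) != 0 && !($var(regsock) =~ "10.0.0.1")) {
    # this contact registered on the other proxy: let that proxy deliver it
    $du = "sip:10.0.0.2:5060";
}
```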
Thanks.
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/53
In order to get rid of any occurrence of ```/tmp``` in the modules, introduce a ```RUN_DIR``` make option that every module should honor.
The default could be ```/var/run/$(NAME)```.
Related to #48
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/80
Reported by: Helmut Grohne <helmut(a)subdivi.de>
The kamailio package now installs /etc/kamailio/kamailio-basic.cfg which
can be selected via the CFGFILE= setting in /etc/default/kamailio. The
configuration contains:
```
modparam("mi_fifo", "fifo_name", "/tmp/kamailio_fifo")
```
This setting is insecure and may allow local users to elevate privileges
to the kamailio user.
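For illustration only (the path is an example, not the packaged fix), the usual mitigation is to point the FIFO at a directory writable only by the service user:
```
# example: assumes /var/run/kamailio exists and is owned by the kamailio user
modparam("mi_fifo", "fifo_name", "/var/run/kamailio/kamailio_fifo")
```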
The issue extends to kamailio-advanced.cfg. It seems that this is due to
an incomplete fix of #712083. Looking further, the state of /tmp file
vulnerabilities in kamailio looks worrisome. Most of the results of the
following command (to be executed in the kamailio source) are likely
vulnerable if executed:
```
grep '/tmp/[a-z0-9_.-]\+\(\$\$\)\?\([" ]\|$\)' -r .
```
Granted, some of the results are examples, documentation or obsolete.
But quite a few reach the default settings:
* kamcmd defaults to connecting to unixs:/tmp/kamailio_ctl.
* The kamailio build definitely is vulnerable as can be seen in
utils/kamctl/Makefile.
More research clearly is required here. Given these findings, the
security team may want to veto the inclusion of kamailio in a stable
release, which would be very unfortunate, as kamailio is quite a unique
piece of software with few competitors in its field.
Helmut
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=775681
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/48
Running the latest 4.2 with usrloc in db_mode=3. It seems that "kamctl stats usrloc" or "kamctl ul show" always shows 0 registered users, even when there are registered users (the location table has entries).
This looks like a bug?
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/113
Hello,
I'm running kamailio:4.2:df86f2a9a09339687af5914b85fe8bd8f8f1f575 and am
getting a periodic crash, once every few days, in response to a CANCEL
message.
The basic back trace is like this:
(gdb) where
#0 0x000000000044fed2 in del_nonshm_lump (lump_list=0x7f48f5681440)
at data_lump.c:677
#1 0x00007f48f5404a15 in free_faked_req (faked_req=0x7f48f5680ea0,
t=0x7f46ee6faeb0) at t_reply.c:975
#2 0x00007f48f5405bdf in run_failure_handlers (t=0x7f46ee6faeb0,
rpl=0xffffffffffffffff, code=487, extra_flags=0) at t_reply.c:1061
#3 0x00007f48f54084e4 in t_should_relay_response (Trans=0x7f46ee6faeb0,
new_code=487, branch=0, should_store=0x7fffce5605e0,
should_relay=0x7fffce5605e4, cancel_data=0x7fffce5606b0,
reply=0xffffffffffffffff) at t_reply.c:1406
#4 0x00007f48f540b045 in relay_reply (t=0x7f46ee6faeb0,
p_msg=0xffffffffffffffff, branch=0, msg_status=487,
cancel_data=0x7fffce5606b0, do_put_on_wait=1) at t_reply.c:1809
#5 0x00007f48f5386832 in cancel_branch (t=0x7f46ee6faeb0, branch=0,
reason=0x0, flags=10) at t_cancel.c:276
#6 0x00007f48f53aff4a in e2e_cancel (cancel_msg=0x7f48f68d69d8,
t_cancel=0x7f46ee8d9c30, t_invite=0x7f46ee6faeb0) at t_fwd.c:1373
#7 0x00007f48f53b4bd0 in t_relay_cancel (p_msg=0x7f48f68d69d8) at
t_fwd.c:1967
#8 0x00007f48f53deaa7 in w_t_relay_cancel (p_msg=0x7f48f68d69d8, _foo=0x0,
_bar=0x0) at tm.c:1743
#9 0x000000000041d364 in do_action (h=0x7fffce560fc0, a=0x7f48f6689f70,
msg=0x7f48f68d69d8) at action.c:1088
#10 0x0000000000429a7a in run_actions (h=0x7fffce560fc0, a=0x7f48f6689f70,
msg=0x7f48f68d69d8) at action.c:1583
#11 0x000000000042a0df in run_actions_safe (h=0x7fffce5622b0,
a=0x7f48f6689f70, msg=0x7f48f68d69d8) at action.c:1648
#12 0x0000000000541158 in rval_get_int (h=0x7fffce5622b0,
msg=0x7f48f68d69d8,
i=0x7fffce561498, rv=0x7f48f668a1e0, cache=0x0) at rvalue.c:924
#13 0x0000000000545390 in rval_expr_eval_int (h=0x7fffce5622b0,
msg=0x7f48f68d69d8, res=0x7fffce561498, rve=0x7f48f668a1d8)
at rvalue.c:1918
#14 0x0000000000545786 in rval_expr_eval_int (h=0x7fffce5622b0,
msg=0x7f48f68d69d8, res=0x7fffce561920, rve=0x7f48f668a948)
at rvalue.c:1926
#15 0x000000000041ce4e in do_action (h=0x7fffce5622b0, a=0x7f48f668c260,
msg=0x7f48f68d69d8) at action.c:1064
#16 0x0000000000429a7a in run_actions (h=0x7fffce5622b0, a=0x7f48f6689808,
msg=0x7f48f68d69d8) at action.c:1583
#17 0x000000000041d2cd in do_action (h=0x7fffce5622b0, a=0x7f48f668c960,
msg=0x7f48f68d69d8) at action.c:1079
#18 0x0000000000429a7a in run_actions (h=0x7fffce5622b0, a=0x7f48f667c628,
msg=0x7f48f68d69d8) at action.c:1583
#19 0x000000000042a1a7 in run_top_route (a=0x7f48f667c628,
msg=0x7f48f68d69d8,
c=0x0) at action.c:1669
#20 0x0000000000507e1a in receive_msg (
buf=0xa6f760 "CANCEL sip:yyy@xxx:5060 SIP/2.0\r\nVia: SIP
/2.0/UDP xxx:5060;branch=z9hG4bK08f.3dc6f0e1.0\r\nFrom: \"yyy\"
<sip:yyy@xxx>;tag=D78eD8FB3SDgc\r\nCall-ID:
e5e48a99-48dd-1233-96b7-782bcb13da6a\r\nTo: <sip:xxx@xxx:5060>\r\nCSeq:
73049624 CANCEL\r\nMax-Forwards: 32\r\nUser-Agent: OpenSIPS (1.9.1-notls
(x86_64/linux))\r\nContent-Length: 0\r\n\r\n", len=394,
rcv_info=0x7fffce5625a0)
at receive.c:216
#21 0x00000000006074ae in udp_rcv_loop () at udp_server.c:521
#22 0x00000000004a5f0b in main_loop () at main.c:1629
#23 0x00000000004ab8bf in main (argc=11, argv=0x7fffce5629c8) at main.c:2578
I'll send a 'thread apply all bt full' privately due to the amount of
private addresses in there, but a quick glance suggests a possible
problem is here:
#5 0x00007f48f5386832 in cancel_branch (t=0x7f46ee6faeb0, branch=0,
reason=0x0, flags=10) at t_cancel.c:276
cancel = 0x1 <Address 0x1 out of bounds>
len = 32584
crb = 0x7f46ee6fb0b0
irb = 0x7f46ee6fb028
ret = 1
tmp_cd = {cancel_bitmap = 0, reason = {cause = 0, u = {text = {
s = 0x0, len = 0}, e2e_cancel = 0x0, packed_hdrs = {s =
0x0,
len = 0}}}}
pcbuf = 0x7f46ee6fb0c0
__FUNCTION__ = "cancel_branch"
#6 0x00007f48f53aff4a in e2e_cancel (cancel_msg=0x7f48f68d69d8,
t_cancel=0x7f46ee8d9c30, t_invite=0x7f46ee6faeb0) at t_fwd.c:1373
cancel_bm = 1
reason = 0x0
free_reason = 0
i = 0
lowest_error = 0
ret = 32584
tmcb = {req = 0x137f66ce710, rpl = 0x7f48f68d69d8, param =
0xf48ab828,
code = -158504488, flags = 32584, branch = 0,
t_rbuf = 0xf80f668f9a0, dst = 0xce5622b0, send_buf = {
s = 0x1ffffffff <Address 0x1ffffffff out of bounds>,
len = -304107664}}
__FUNCTION__ = "e2e_cancel"
#7 0x00007f48f53b4bd0 in t_relay_cancel (p_msg=0x7f48f68d69d8) at
t_fwd.c:1967
t_invite = 0x7f46ee6faeb0
t = 0x7f46ee8d9c30
ret = -323705680
new_tran = 1
Thanks,
-- Alex
--
Alex Balashov - Principal
Evariste Systems LLC
303 Perimeter Center North
Suite 300
Atlanta, GA 30346
United States
Tel: +1-678-954-0670
Web: http://www.evaristesys.com/
Hello,
it is time to nail down the roadmap to the next major release. We
discussed it during the last IRC devel meeting, proposing to get it out by
the beginning of June. Given that we need at least one month of testing, I
propose the next milestones:
- freezing the development: Wednesday, April 22, 2015
- if testing goes smooth, then branching 4.3 after about one month:
During the week starting May 18
- test more in beta phase, prepare packaging, etc. and release after 2-3
weeks: One of the days between June 4 and 11
Other suggestions or adjustments are welcome! Send them to the mailing
list to discuss further.
Cheers,
Daniel
--
Daniel-Constantin Mierla
http://twitter.com/#!/miconda - http://www.linkedin.com/in/miconda
Kamailio World Conference, May 27-29, 2015
Berlin, Germany - http://www.kamailioworld.com
Hello,
following some recent discussions on github issues, the mailing list and
irc, I am opening a topic to see if we can come up with something that
accommodates the needs out there as best as possible for everyone.
These are mostly in the context of growing deployment sizes, but also of
having the option to use a no-sql backend; some internal bits of usrloc
may need tuning.
The points to be considered:
1) discovering which server wrote the record
2) efficiently getting the records that require nat keepalive
The two are sometimes related, but not necessarily always.
Victor Seva was saying on the irc channel that he is planning to propose a
patch for 1), relying on the socket value in the location table. The use
case presented was many servers saving and looking up records in the same
table, but each fetching only the records written by the current instance
for sending keepalives. It makes sense not to send keepalives from all
instances, for load as well as bandwidth considerations. This could work by
relying on the local socket, but I am not sure how efficient it will be
when listening on many sockets (different ips, but also different ports or
protocols - ipv4/6, tcp, tls, udp -- they all add sockets).
On 2), it has been on my list to review for a while, as the module uses
custom sql queries (or even functions) to be able to match the records for
sending keepalives -- it does the matching with bitwise operations to see
if the nat flags are set. That does not work with no-sql databases
(including db_text, db_mongodb and, I expect, db_cassandra). Even for sql,
that kind of query is not efficient, especially when dealing with a large
number of records.
The solutions that came to my mind so far:
- for 1) -- add the server_id as a new column in the location table. It
was pointed out to me that the ser location table had it. This turns a
query matching on many socket-representation strings into a single
expression on an integer. The value of server_id can be set via the core
parameter with the same name (a small config sketch follows this list).
- for 2) -- add a new column 'keepalive', set internally by the module to
1 if any of the flags for sending keepalives was set. The query to fetch
the records will then just match on it, rather than using bitwise
operations in the match expression. Furthermore, it can be set to a
different value when enabling keepalive partitioning (i.e., sending
keepalives from different timer processes, each handling a part of the
natted users) - right now, for the db-only mode, the selection is done by
column id % (number of keepalive processes).
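For 1), a configuration-side sketch (the new columns and the reshaped query are the proposed parts, not existing behaviour; the db_url value is just a typical example):
```
# sketch: each instance sets a distinct core server_id; with the proposed
# columns, the keepalive fetch for this instance could then be a plain
#   WHERE server_id = 1 AND keepalive = 1
# instead of socket-string lists and bitwise flag tests
server_id=1

modparam("usrloc", "db_url", "mysql://kamailio:kamailiorw@localhost/kamailio")
modparam("usrloc", "db_mode", 3)
```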
Opinions, improvements, other proposals?
Cheers,
Daniel
--
Daniel-Constantin Mierla
http://twitter.com/#!/miconda - http://www.linkedin.com/in/miconda
Kamailio World Conference, May 27-29, 2015
Berlin, Germany - http://www.kamailioworld.com