Hello. I need parallel forking to all contacts registered under the same username (for example, a call to all contacts named User123). My endpoints may be WebSocket based or standard UDP endpoints, and I use rtpengine_manage for managing calls for both webphones and standard soft/hard phones.
I fetch all contacts manually and then, in the branch route, set the rtpengine_manage options for every call.
It works fine, but only with a single Kamailio server.
When I use 2 Kamailio servers as load balancers, the server that handles the call gets all endpoints from location, but only calls the ones that registered at that same server.
For example, I call user123 and I have 3 contacts:
user123@1.2.3.4 - registered at kamailio 1
user123@3.2.1.4 - registered at kamailio 2
user123@4.3.2.1 - registered at kamailio 1
So if the call goes through kamailio 1, it calls only user123@1.2.3.4 and user123@4.3.2.1.
I use these usrloc settings on both Kamailios to share the location table between the two servers:
modparam("usrloc", "db_url", DBURL)
modparam("usrloc", "db_mode", 3)
modparam("usrloc", "user_column", "username")
modparam("usrloc", "contact_column", "contact")
modparam("usrloc", "expires_column", "expires")
modparam("usrloc", "q_column", "q")
modparam("usrloc", "callid_column", "callid")
modparam("usrloc", "cseq_column", "cseq")
modparam("usrloc", "methods_column", "methods")
modparam("usrloc", "cflags_column", "cflags")
modparam("usrloc", "user_agent_column", "user_agent")
modparam("usrloc", "received_column", "received")
modparam("usrloc", "socket_column", "socket")
modparam("usrloc", "path_column", "path")
modparam("usrloc", "ruid_column", "ruid")
modparam("usrloc", "instance_column", "instance")
modparam("usrloc", "use_domain", 1)
and this code for calling them:
```
route[GET_CONTACTS]
{
    sql_query("ca", "select contact from location where username='$tU'", "ra");
    xlog("rows: $dbr(ra=>rows) cols: $dbr(ra=>cols)\n");
    if ($dbr(ra=>rows) > 0) {
        $var(i) = 0;
        while ($var(i) < $dbr(ra=>rows)) {
            xlog("L_INFO", "SQL query returned contact {$dbr(ra=>[$var(i),0])} for {$tU} at step {$var(i)}\n");
            if ($dbr(ra=>[$var(i),0]) =~ "transport=ws") {
                xlog("L_INFO", "This is a WebSocket call to the endpoint\n");
                sql_pvquery("ca", "select received from location where contact='$dbr(ra=>[$var(i),0])'", "$var(received)");
                $du = $var(received);
                xlog("L_INFO", "SQL query returned received {$var(received)} for {$tU}. Destination is {$du}\n");
                append_branch("sip:$tU@$(du{s.select,1,:})");
            } else {
                xlog("L_INFO", "This is a classic UDP call to the endpoint\n");
                $var(received) = 0;
                sql_pvquery("ca", "select received from location where contact='$dbr(ra=>[$var(i),0])'", "$var(received)");
                xlog("L_INFO", "SQL query returned received {$var(received)}\n");
                if ($var(received) == 0) {
                    xlog("L_INFO", "Received string is EMPTY\n");
                    $du = "sip:" + $(dbr(ra=>[$var(i),0]){s.select,1,@});
                } else {
                    xlog("L_INFO", "Received string is {$var(received)}\n");
                    $du = $var(received);
                }
                $var(UDP_contact) = "sip:" + $(dbr(ra=>[$var(i),0]){s.select,1,@});
                append_branch("sip:$tU@$(du{s.select,1,:})");
                xlog("L_INFO", "Classic destination URI is {$dbr(ra=>[$var(i),0])} for {$tU}. Destination is {$du}\n");
            }
            $var(i) = $var(i) + 1;
        }
    }
    t_on_branch("1");
    return;
}
```
```
branch_route[1] {
    if ($du =~ "transport=ws") {
        xlog("L_INFO", "WebSocket branch is {$du} for {$tU}\n");
        rtpengine_manage("internal external force trust-address replace-origin replace-session-connection ICE=force RTP/SAVPF");
        t_on_reply("REPLY_FROM_WS");
    } else {
        xlog("L_INFO", "UDP branch is {$du} for {$tU}\n");
        rtpengine_manage("replace-origin replace-session-connection ICE=remove RTP/AVP");
        t_on_reply("MANAGE_CLASSIC_REPLY");
    }
}
```
When it tries to branch to an endpoint that has no registration at the server handling the call, I get errors that the tm module cannot build the Via header:
```
via_builder(): TCP/TLS connection (id: 0) for WebSocket could not be found
ERROR: <core> [msg_translator.c:1725]: build_req_buf_from_sip_req(): could not create Via header
ERROR: <core> [forward.c:607]: forward_request(): ERROR: forward_request: building failed
```
UDP calls get errors similar to the above (sorry, I cannot share the exact error output; this situation does not happen often).
So I think I have this trouble because I handle the calls manually, and I tried to switch to the lookup_branches function instead, but I have no idea how to set the rtpengine_manage parameters for each endpoint depending on whether it is a WebSocket or a standard call.
If the right approach for calls going through 2 Kamailios as load balancers is lookup_branches, please let me know how to set the rtpengine_manage parameters per endpoint for every fork. If not, can you tell me how I can call all endpoints regardless of which server (kamailio 1 or 2) they registered at?
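What I imagine (but have not verified) is something like the following sketch, with made-up route names, where the per-endpoint decision moves from my manual SQL loop into the branch route by checking the branch URI; the rtpengine_manage flags and reply route names are the ones from my config above:
```
route[FORK_ALL] {
    # db_mode 3 means lookup() reads the shared location table,
    # so contacts registered on either Kamailio should be returned
    if (!lookup("location")) {
        sl_send_reply("404", "Not Found");
        exit;
    }
    t_on_branch("PER_ENDPOINT");
    if (!t_relay()) {
        sl_reply_error();
    }
}

branch_route[PER_ENDPOINT] {
    # $ru holds the URI of the current branch, so the transport
    # can be checked per fork instead of once per call
    if ($ru =~ "transport=ws" || $du =~ "transport=ws") {
        rtpengine_manage("internal external force trust-address replace-origin replace-session-connection ICE=force RTP/SAVPF");
        t_on_reply("REPLY_FROM_WS");
    } else {
        rtpengine_manage("replace-origin replace-session-connection ICE=remove RTP/AVP");
        t_on_reply("MANAGE_CLASSIC_REPLY");
    }
}
```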
On the other hand, I cannot understand why kamailio 2 does not see registrations made at kamailio 1 (or vice versa). Maybe this is a problem in the usrloc module; that is why I am reporting it here.
Thanks.
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/53
In order to get rid of any occurrence of `/tmp` in modules, introduce a `RUN_DIR` make option that every module should honor.
The default could be `/var/run/$(NAME)`.
Related to #48
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/80
Reported by: Helmut Grohne <helmut@subdivi.de>
The kamailio package now installs /etc/kamailio/kamailio-basic.cfg which
can be selected via the CFGFILE= setting in /etc/default/kamailio. The
configuration contains:
```
modparam("mi_fifo", "fifo_name", "/tmp/kamailio_fifo")
```
This setting is insecure and may allow local users to elevate privileges
to the kamailio user.
The issue extends to kamailio-advanced.cfg. It seems that this is due to
an incomplete fix of #712083. Looking further, the state of /tmp file
vulnerabilities in kamailio looks worrisome. Most of the hits from the
following command (to be executed in the kamailio source tree) point at
code that is likely vulnerable if executed:
```
grep '/tmp/[a-z0-9_.-]\+\(\$\$\)\?\([" ]\|$\)' -r .
```
Granted, some of the results are examples, documentation or obsolete.
But quite a few reach the default settings:
* kamcmd defaults to connecting to unixs:/tmp/kamailio_ctl.
* The kamailio build itself is definitely vulnerable, as can be seen in
utils/kamctl/Makefile.
More research is clearly required here. Given these findings, the
security team may want to veto the inclusion of kamailio in a stable
release, which would be very unfortunate, as kamailio is quite a unique
piece of software with few competitors in its field.
Helmut
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=775681
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/48
Hello,
following some discussions on github issues, the mailing list and irc
recently, I am opening a topic to see if we can get something that
accommodates the needs out there at the best for everyone.
This is mostly in the context of the increasing sizes of deployments,
but also of having the option to use a no-sql backend; some internal
bits of usrloc may need tuning.
The points to be considered:
1) discovering which server wrote the record
2) efficiently getting the records that require nat keepalive
The two are sometimes related, but not necessarily always.
Victor Seva was saying on the irc channel that he is planning to propose
a patch for 1), relying on the socket value in the location table. The
use case presented was many servers saving and looking up the records in
the same table, but each fetching only the records written by the
current instance for sending keepalives. It makes sense not to send
keepalives from all instances, for load as well as bandwidth
considerations. This could work relying on the local socket, but I am
not sure how efficient it will be when listening on many sockets
(different ips, but also different ports or protocols - ipv4/6, tcp,
tls, udp -- all add sockets).
On 2), it has been on my list to review for a while, as the module uses
custom sql queries (or even functions) to be able to match the records
for sending keepalives -- it does the matching with bitwise operations
to see if the flags for nat are set. That doesn't work with no-sql
databases (including db_text, db_mongodb and I expect db_cassandra).
Even for sql, that kind of query is not efficient, especially when
dealing with a large number of records.
The solutions that came to my mind for now:
- for 1) -- add the server_id as a new column in the location table. It
was pointed out to me that the ser location table had it. This will turn
a query matching on many socket-representation strings into one
expression on an integer. The value for server_id can be set via the
core parameter with the same name.
- for 2) -- add a new column 'keepalive', to be set internally by the
module to 1 if any of the flags for sending keepalive was set. Then the
query to fetch the records will just use a match on it, rather than
bitwise operations in the match expression. Furthermore, this can be set
to a different value when enabling keepalive partitioning (i.e., sending
keepalives from different timer processes, each handling a part of the
natted users) - right now, for the db-only mode, the selection is done
by column id % (number of keepalive processes).
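To illustrate, here is a sketch of the configuration side; only the
server_id core parameter exists today, and the keepalive column name is
tentative:
```
# each instance sets its own id via the existing core parameter
server_id=1

# db-only mode, all instances share the same location table
modparam("usrloc", "db_mode", 3)

# with the proposed columns, the keepalive fetch could become a plain
# equality match such as:
#   SELECT contact, received FROM location
#   WHERE server_id=1 AND keepalive=1
# instead of bitwise-matching the nat flags in the WHERE clause
```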
Opinions, improvements, other proposals?
Cheers,
Daniel
--
Daniel-Constantin Mierla
http://twitter.com/#!/miconda - http://www.linkedin.com/in/miconda
Kamailio World Conference, May 27-29, 2015
Berlin, Germany - http://www.kamailioworld.com
Following #48 and #80
You can view, comment on, or merge this pull request online at:
https://github.com/kamailio/kamailio/pull/89
-- Commit Summary --
* ctl: use RUN_DIR env to set DEFAULT_CTL_SOCKET
* etc: set ctl "binrpc" to new /var/run/kamailio default value
* utils/kamctl: change fifo default path to /var/run/kamailio
* etc: change fifo default to /var/run/kamailio
-- File Changes --
M etc/kamailio-basic.cfg (4)
M etc/kamailio-oob.cfg (4)
M etc/kamailio.cfg (4)
M modules/ctl/ctl_defaults.h (4)
M utils/kamctl/kamctl.fifo (2)
M utils/kamctl/kamctlrc (4)
-- Patch Links --
https://github.com/kamailio/kamailio/pull/89.patch
https://github.com/kamailio/kamailio/pull/89.diff
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/pull/89
Hi All,
I have been experiencing a deadlock when a timeout occurs on a t_relayed()
INVITE. Going through the code I have noticed a possible deadlock
(without reentrant locks enabled). Here is my thinking:
t_should_relay_response() is called with the REPLY_LOCK held when the
timer process fires on the fr_inv_timer (no response to the relayed
INVITE other than a 100 provisional) and a 408 is generated. However,
from within that function there are calls to run_failure_handlers(),
which in turn *could* try to lock the reply (viz. somebody having a
t_reply() call in the cfg file, in a failure route block). This would
result in another lock on the same transaction's REPLY_LOCK....
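To be concrete, the cfg pattern I am thinking of is nothing exotic, just a failure route armed on the relayed INVITE that calls t_reply() (route names made up):
```
route[RELAY_INVITE] {
    t_on_failure("INVITE_TIMEOUT");
    t_relay();
}

failure_route[INVITE_TIMEOUT] {
    # when fr_inv_timer fires, this runs from the timer process while
    # the REPLY_LOCK is already held for the locally generated 408;
    # t_reply() would then try to take the same lock again
    if (t_check_status("408")) {
        t_reply("480", "Temporarily Unavailable");
    }
}
```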
Has anybody else experienced something like this?
this is on master btw.
Cheers
Jason
If Kamailio is globally configured to send offline notification replies using `modparam("msilo", "from_address", "sip:$rU@example.com")`, there is currently no way to disable the offline notification reply during script processing.
For example, a scenario where you might want to store the original MESSAGE but not send the offline notification reply is when you are also using the IMC module. When `user@example.com` is part of an IMC chat but goes offline for some reason, the MSILO module will store the original MESSAGE, then generate the offline notification reply back to the IMC chat, which generates another MESSAGE with `user@example.com` as a recipient... This instantly leads to thousands of MESSAGE generations.
I am thinking that it would be a nice feature to have offline notification replies enabled when `modparam("msilo", "from_address", "sip:$rU@example.com")` is defined, but have the MSILO module check the existence (or non-existence) of a flag to determine whether or not it should generate an offline notification reply, so the logic would be something like:
```
#!define FLT_MSILO_DISABLE_OFFLINE_REPLY 13
modparam("msilo", "from_address", "sip:$rU@example.com")
modparam("msilo", "disable_offline_reply_flag", FLT_MSILO_DISABLE_OFFLINE_REPLY)
```
Then m_store() checks that `from_address` is valid and that `disable_offline_reply_flag` is not set.
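Usage in the routing script would then be something like this (the IMC-room match below is purely illustrative, and the flag behavior is the proposal above, not existing module API):
```
if (is_method("MESSAGE") && !lookup("location")) {
    # suppress the offline notification for messages generated by a
    # conference room (illustrative prefix match on the From user)
    if ($fU =~ "^chat-") {
        setflag(FLT_MSILO_DISABLE_OFFLINE_REPLY);
    }
    if (m_store("$ru")) {
        sl_send_reply("202", "Accepted");
    }
    exit;
}
```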
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/61
When trying to load my server in the background, the main process core dumps at init of db_sqlite.
Git head, OS X.
```
(gdb) bt full
#0  0x00007fff8610c66a in _dispatch_barrier_async_f_slow ()
No symbol table info available.
#1  0x00007fff8acd13bd in sqlite3_initialize ()
No symbol table info available.
#2  0x0000000104b75ba3 in sqlite_mod_init () at db_sqlite.c:69
No locals.
#3  0x00000001033bc93c in init_mod (m=0x103809b80) at sr_module.c:943
No locals.
#4  0x00000001033bc414 in init_mod (m=0x103809e58) at sr_module.c:940
No locals.
```
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/43
My websocket TLS server is full of these kinds of messages:
```
Jan 18 18:10:26 ws0 /usr/sbin/kamailio[19701]: NOTICE: <script>: http:217.120.x.x:55386: WS connection closed
...
Jan 18 18:10:26 ws0 /usr/sbin/kamailio[19689]: WARNING: <core> [msg_translator.c:2506]: via_builder(): TCP/TLS connection (id: 0) for WebSocket could not be found
Jan 18 18:10:26 ws0 /usr/sbin/kamailio[19689]: ERROR: <core> [msg_translator.c:1722]: build_req_buf_from_sip_req(): could not create Via header
Jan 18 18:10:26 ws0 /usr/sbin/kamailio[19689]: ERROR: tm [t_fwd.c:527]: prepare_new_uac(): could not build request
Jan 18 18:10:26 ws0 /usr/sbin/kamailio[19689]: ERROR: tm [t_fwd.c:1773]: t_forward_nonack(): ERROR: t_forward_nonack: failure to add branches
Jan 18 18:10:26 ws0 /usr/sbin/kamailio[19689]: ERROR: sl [sl_funcs.c:387]: sl_reply_error(): ERROR: sl_reply_error used: No error (2/SL)
```
(repeat these last errors for a bunch of attempted NOTIFY forwards)
The route block does basically something like this:
```
# add_contact_alias(); # only for requests from the outside
loose_route();
if (!t_relay()) {
    sl_reply_error();
}
```
The problem arises here:
```
}else if (send_info->proto==PROTO_WS){
    ...
    con = tcpconn_get(send_info->id, &ip, port, from, 0)
    ...
    if (con == NULL) {
        LM_WARN("TCP/TLS connection (id: %d) for WebSocket could not be found\n", send_info->id);
```
The NULL failure status gets returned up to `prepare_new_uac` in `t_fwd.c`:
```
shbuf=build_req_buf_from_sip_req( i_req, &len, dst, BUILD_IN_SHM);
if (!shbuf) {
    LM_ERR("could not build request\n");
    ret=E_OUT_OF_MEM;
    goto error01;
}
```
At this point, ser_error will become `E_OUT_OF_MEM` while it should be something like `E_SEND`.
And `E_OUT_OF_MEM` gets translated to `500 No Error` because we're not running in DEBUG mode.
What causes the connection to drop in the first place, you ask?
```
18:10:18.690738 IP 217.120.x.x.55386 > 195.35.x.x.443: Flags [S], seq 1323983240, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
18:10:18.690863 IP 195.35.x.x.443 > 217.120.x.x.55386: Flags [S.], seq 4077761781, ack 1323983241, win 14600, options [mss 1460,nop,nop,sackOK,nop,wscale 3], length 0
18:10:18.710846 IP 217.120.x.x.55386 > 195.35.x.x.443: Flags [.], ack 1, win 256, length 0
18:10:18.808751 IP 217.120.x.x.55386 > 195.35.x.x.443: Flags [P.], seq 1:246, ack 1, win 256, length 245
...
18:10:19.233415 IP 195.35.x.x.443 > 217.120.x.x.55386: Flags [.], ack 31348, win 5126, length 0
18:10:26.489764 IP 217.120.x.x.55386 > 195.35.x.x.443: Flags [P.], seq 31348:32473, ack 34578, win 255, length 1125
...
18:10:26.501409 IP 195.35.x.x.443 > 217.120.x.x.55386: Flags [P.], seq 42255:42916, ack 46010, win 5046, length 661
18:10:26.527755 IP 217.120.x.x.55386 > 195.35.x.x.443: Flags [.], ack 36993, win 252, length 0
18:10:26.527860 IP 195.35.x.x.443 > 217.120.x.x.55386: Flags [.], seq 42916:47296, ack 46010, win 5278, length 4380
18:10:26.527888 IP 195.35.x.x.443 > 217.120.x.x.55386: Flags [FP.], seq 47296:48663, ack 46010, win 5278, length 1367
18:10:26.529179 IP 217.120.x.x.55386 > 195.35.x.x.443: Flags [.], ack 40501, win 254, length 0
18:10:26.529200 IP 217.120.x.x.55386 > 195.35.x.x.443: Flags [.], ack 42916, win 251, length 0
18:10:26.547276 IP 217.120.x.x.55386 > 195.35.x.x.443: Flags [.], ack 48664, win 251, length 0
18:10:26.549712 IP 217.120.x.x.55386 > 195.35.x.x.443: Flags [F.], seq 46010, ack 48664, win 251, length 0
18:10:26.549750 IP 195.35.x.x.443 > 217.120.x.x.55386: Flags [.], ack 46011, win 5278, length 0
```
There you see that the FIN is initiated by `195.35.x.x`, which is the Kamailio websocket server.
The cause (probably) is the WS client closing the connection, in this case after re-subscribing with Expires: 0. The presence server attempts to reply with a bunch of NOTIFYs with `Subscription-State: terminated;reason=timeout`, but they bounce on the broken connection. If Kamailio returned a nice "477 Unfortunately error on sending to next hop occurred", it would be prettier.
Getting fewer "error" messages (a total of 6 *per* expired/unsubscribed subscription) after this error, which is apparently very common, would be beneficial too.
As for fixing:
- We could change the `via_builder` to set `ser_error` (and check that in `build_req_buf_from_sip_req`), or
- add error-code-out-parameters to all calls from `build_req_buf_from_sip_req` and down.
I'm not sure if either is the best way.
As for the excessive error reporting, would looking at `ser_error` before printing (another) error be an acceptable fix?
Cheers,
Walter Doekes
OSSO B.V.
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/47
This patch adds the possibility to specify the source IP address we want to use when connecting to a peer.
This is useful when we have several IP addresses on the same machine and we want Diameter traffic going out from a specific one.
You can view, comment on, or merge this pull request online at:
https://github.com/kamailio/kamailio/pull/45
-- Commit Summary --
* modules/cdp: added src_addr parameter in peer definition
-- File Changes --
M modules/cdp/config.h (1)
M modules/cdp/configexample/ConfigExample.xml (2)
M modules/cdp/configparser.c (9)
M modules/cdp/peer.c (5)
M modules/cdp/peer.h (23)
M modules/cdp/peermanager.c (6)
M modules/cdp/receiver.c (21)
-- Patch Links --
https://github.com/kamailio/kamailio/pull/45.patch
https://github.com/kamailio/kamailio/pull/45.diff
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/pull/45