Hello!
Is it possible to find out the name of the onreply_route that was set
before?
Something like this:
t_on_reply("MANAGE_REPLY");
...
if ( t_is_set("onreply_route") ) {
get_onreply_route_name();
...
}
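A workaround I'm considering in the meantime: store the name myself in an AVP at the point where the route is armed. A sketch (the AVP name $avp(onreply_name) is my own invention, not a built-in):

```cfg
# remember which reply route was armed (workaround sketch)
$avp(onreply_name) = "MANAGE_REPLY";
t_on_reply("MANAGE_REPLY");
# ... later in the script ...
if ($avp(onreply_name) != $null) {
    xlog("L_INFO", "armed onreply_route: $avp(onreply_name)\n");
}
```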
--
BR,
Denys Pozniak
I have Kamailio behind a TLS termination proxy, so the sockets are correctly
deduced to be TCP. However, the clients only talk TLS to the proxy and are
confused when the top Via header added by Kamailio says TCP. Is there a way
for Kamailio to forcibly pretend its protocol is TLS? Something like
advertised_address, but an "advertised_protocol" instead.
(Testing with pjsip: it has a use_tls flag which ignores the TCP from
Kamailio and continues to use the persistent TLS transport to the proxy.
Linphone fails because it tries to honor the TCP in the Via and is unable to
establish a TCP transport.)
BTW I am using t_relay_to_tcp so Kamailio will return traffic to the proxy
as TCP even though the contact addresses specify transport=TLS.
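For context, my listen line currently looks roughly like this (addresses are placeholders); `advertise` lets me override the address and port that end up in the Via, but not the transport token:

```cfg
# advertise rewrites the address/port used in Via and Record-Route,
# but the protocol stays tcp
listen=tcp:10.0.0.5:5060 advertise 203.0.113.1:5061
```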
Hi everybody,
I'm just testing Kamailio 5.4.1 with dialog replication over DMQ. This
seems to work very well. Dialogs are replicated without problems.
When I restart one node, I would have expected all dialogs to be
synced again, just like in dmq_usrloc.
But this does not happen. After a restart, the node's dialog list is empty.
Did I miss something? Is there a special parameter that I have to set?
BR, Björn
--
Björn Klasen, Specialist
TNG Stadtnetz GmbH, Network Management (VoIP)
Projensdorfer Straße 324
24106 Kiel
Germany
T +49 431/ 530530
F +49 431/ 7097-555
mailto: bklasen(a)tng.de
http://www.tng.de
Register: Amtsgericht Kiel HRB 6002 KI
Executive board (Geschäftsführung): Dr.-Ing. Volkmar Hausberg,
Sven Schade, Carsten Tolkmit, Dr. Sven Willert
Tax-Id (Steuernr.): 2029047020, VAT-Id (USt-Id): DE225201428
Hi,
I still have the problem that my htables are not written into an sqlite
db when I shut down Kamailio.
Manual writing via:
kamcmd htable.store ...
is working.
As I can see in the logs at the highest log level, the function destroy()
is never called.
ht_db_url.len is > 0.
ht_db_init_con() should work, otherwise nothing would work with the db.
But I cannot see the debug messages from ht_db_open_con().
Any ideas?
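For reference, my htable/db setup looks roughly like this (the table name and the sqlite path are placeholders for my real values):

```cfg
modparam("htable", "db_url", "sqlite:////etc/kamailio/htable.db")
# with dbtable set, the table should be loaded at startup and
# written back at shutdown via destroy()
modparam("htable", "htable", "a=>size=8;autoexpire=0;dbtable=ht_a;")
```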
Best regards,
Bernd
Hello,
I am looking for information about Kamailio operating as an SBC.
I saw this video of Kamailio World 2021: Kamailio As An SBC For Network Segregation:
https://www.youtube.com/watch?v=UW6l3R4OnsY&t=1381s
But where can I find more detailed technical information?
Regards
Hello,
In reference to this issue https://github.com/kamailio/kamailio/issues/3081, I've been advised to set http_reply_parse to "no", which I did (and it is the default value anyway).
# grep http_reply_parse /etc/kamailio/kamailio.cfg
http_reply_parse=no
After setting this, I still get these parsing errors:
{ "idx": 23, "pid": 102, "level": "ERROR", "module": "core", "file": "core/parser/msg_parser.c", "line": 748, "function": "parse_msg", "logprefix": "", "message": "ERROR: parse_msg: message=<HTTP/1.1 100 Continue\r\n\r\nHTTP/1.1 200 OK\r\ndate: Wed, 26 Jul 2023 16:43:19 GMT\r\ncontent-length: 224\r\ncontent-type: text/plain; charset=utf-8\r\n\r\n{\"result\": {\"ruri\":\"\",\"fU\":\"\",\"tU\":\"\",\"privacy\":\"\",\"identity\":\"\",\"error\":{\"code\":0,\"message\":\"\"},\"mb\":\"\",\"Headers\":null,\"encrypted\":\"\",\"sipsScheme\":\"\",\"attestation\":\"\",\"authUser\":\"\",\"authPassword\":\"\",\"disableStirShaken\":\"\"}}>" }
Is there anything that I am missing beyond setting that parameter?
Thank you,
Alexandru
Hello!
We had several crashes of Kamailio (5.3.4) in the last weeks. Each time, the
last logs are:
/usr/local/sbin/kamailio[20942]: CRITICAL: <core> [core/forward.c:347]:
get_send_socket2(): unsupported proto 0 (*)
/usr/local/sbin/kamailio[20966]: CRITICAL: <core> [core/forward.c:347]:
get_send_socket2(): unsupported proto 111 (unknown)
Looking at the coredump, here are the first frames of the backtrace:
(gdb) bt
#0 0x00007f70131b3d91 in prepare_new_uac (t=0x7f6fe1d45340,
i_req=0x7f7016fbe208, branch=0, uri=0x7fff37142fa0, path=0x7fff37142f80,
next_hop=0x7f7016fbe480, fsocket=0x0,
snd_flags=..., fproto=0, flags=2, instance=0x7fff37142f70,
ruid=0x7fff37142f60, location_ua=0x7fff37142f50) at t_fwd.c:463
#1 0x00007f70131b7c42 in add_uac (t=0x7f6fe1d45340, request=0x7f7016fbe208,
uri=0x7f7016fbe480, next_hop=0x7f7016fbe480, path=0x7f7016fbe848, proxy=0x0,
fsocket=0x0,
snd_flags=..., proto=0, flags=2, instance=0x7f7016fbe858,
ruid=0x7f7016fbe870, location_ua=0x7f7016fbe880) at t_fwd.c:805
#2 0x00007f70131bfebd in t_forward_nonack (t=0x7f6fe1d45340,
p_msg=0x7f7016fbe208, proxy=0x0, proto=0) at t_fwd.c:1667
#3 0x00007f701316cf44 in t_relay_to (p_msg=0x7f7016fbe208, proxy=0x0,
proto=0, replicate=0) at t_funcs.c:332
#4 0x00007f70131a3a11 in _w_t_relay_to (p_msg=0x7f7016fbe208, proxy=0x0,
force_proto=0) at tm.c:1689
#5 0x00007f70131a4c51 in w_t_relay (p_msg=0x7f7016fbe208, _foo=0x0,
_bar=0x0) at tm.c:1889
#6 0x00000000005a1f57 in do_action (h=0x7fff37143d90, a=0x7f7016bd1b10,
msg=0x7f7016fbe208) at core/action.c:1071
#7 0x00000000005aeb1e in run_actions (h=0x7fff37143d90, a=0x7f7016bd1b10,
msg=0x7f7016fbe208) at core/action.c:1576
#8 0x00000000005af1df in run_actions_safe (h=0x7fff37146e90,
a=0x7f7016bd1b10, msg=0x7f7016fbe208) at core/action.c:1640
#9 0x000000000066aa50 in rval_get_int (h=0x7fff37146e90,
msg=0x7f7016fbe208, i=0x7fff37144238, rv=0x7f7016bd1e60, cache=0x0) at
core/rvalue.c:915
#10 0x000000000066f000 in rval_expr_eval_int (h=0x7fff37146e90,
msg=0x7f7016fbe208, res=0x7fff37144238, rve=0x7f7016bd1e58) at
core/rvalue.c:1913
#11 0x000000000066f453 in rval_expr_eval_int (h=0x7fff37146e90,
msg=0x7f7016fbe208, res=0x7fff371446ec, rve=0x7f7016bd2588) at
core/rvalue.c:1921
#12 0x00000000005a1a1d in do_action (h=0x7fff37146e90, a=0x7f7016bd2e08,
msg=0x7f7016fbe208) at core/action.c:1047
#13 0x00000000005aeb1e in run_actions (h=0x7fff37146e90, a=0x7f7016bcbd20,
msg=0x7f7016fbe208) at core/action.c:1576
#14 0x000000000059e97d in do_action (h=0x7fff37146e90, a=0x7f7016ef19b8,
msg=0x7f7016fbe208) at core/action.c:695
#15 0x00000000005aeb1e in run_actions (h=0x7fff37146e90, a=0x7f7016e3f578,
msg=0x7f7016fbe208) at core/action.c:1576
#16 0x000000000059e97d in do_action (h=0x7fff37146e90, a=0x7f7016c220a0,
msg=0x7f7016fbe208) at core/action.c:695
#17 0x00000000005aeb1e in run_actions (h=0x7fff37146e90, a=0x7f7016c220a0,
msg=0x7f7016fbe208) at core/action.c:1576
#18 0x00000000005aae93 in do_action (h=0x7fff37146e90, a=0x7f7016c23978,
msg=0x7f7016fbe208) at core/action.c:1207
#19 0x00000000005aeb1e in run_actions (h=0x7fff37146e90, a=0x7f7016c20748,
msg=0x7f7016fbe208) at core/action.c:1576
#20 0x00000000005a1ec6 in do_action (h=0x7fff37146e90, a=0x7f7016c576a0,
msg=0x7f7016fbe208) at core/action.c:1062
#21 0x00000000005aeb1e in run_actions (h=0x7fff37146e90, a=0x7f7016c0b770,
msg=0x7f7016fbe208) at core/action.c:1576
#22 0x000000000059e97d in do_action (h=0x7fff37146e90, a=0x7f7016bc9920,
msg=0x7f7016fbe208) at core/action.c:695
#23 0x00000000005aeb1e in run_actions (h=0x7fff37146e90, a=0x7f7016bb9100,
msg=0x7f7016fbe208) at core/action.c:1576
#24 0x00000000005af2a7 in run_top_route (a=0x7f7016bb9100,
msg=0x7f7016fbe208, c=0x0) at core/action.c:1661
#25 0x00000000005bcb7a in receive_msg (
buf=0xadbd80 <buf.6971> "INVITE sip:AAAAAAAAAA@W.X.Y.Z SIP/2.0\r\nVia:
SIP/2.0/UDP W.X.Y.Z;rport;branch=z9hG4bKZD8Za7p2XF7cF\r\nMax-Forwards:
69\r\nFrom: \"AAAAAAAAAA\" sip:AAAAAAAAAA@W.X.Y.Z;tag=aXy4Na2NtvKBK\r\nTo:
<"..., len=1011, rcv_info=0x7fff371474c0) at core/receive.c:423
#26 0x0000000000488e7f in udp_rcv_loop () at core/udp_server.c:548
#27 0x000000000042650f in main_loop () at main.c:1673
#28 0x000000000042ec18 in main (argc=7, argv=0x7fff37147c38) at main.c:2802
Is there anything relevant in this backtrace that indicates what's wrong?
Regards,
Igor.
Hi,
I am building a configuration script where parallel forking will be needed
for some traffic flows, and in this case I need to go beyond the default
limit on the maximum number of branches.
Apart from this specific case, the traffic load that Kamailio will need
to handle is very low: just the occasional burst, with up
to 5 concurrent calls at most.
As far as I understand, this limit is set here:
/usr/local/src/kamailio-5.6/kamailio/src/core/config.h
And the default limit is:
#define MAX_BRANCHES_LIMIT 32 /*!< limit of maximum number of branches per transaction */
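For context, the kind of parallel forking that would hit this limit can be sketched like this (destinations are illustrative):

```cfg
# create parallel branches before relaying (illustrative destinations)
append_branch("sip:alice@203.0.113.10");
append_branch("sip:alice@203.0.113.11");
# ... more branches, potentially beyond the 32-branch limit ...
t_relay();
```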
Here are the questions I have related to this:
1) If I increase the value of this constant in config.h, how high is it
reasonable to set this value and still have a stable system?
2) If I increase MAX_BRANCHES_LIMIT beyond 32, are there also other
parameters that need to be changed for the system to be able to cope,
and if so, which parameters?
Regards,
Lars
Bastian,
According to my stats, all seems OK...
# kamctl stats shmem
{
"jsonrpc": "2.0",
"result": [
"shmem:fragments = 330",
"shmem:free_size = 1889418024",
"shmem:max_used_size = 258962624",
"shmem:real_used_size = 258065624",
"shmem:total_size = 2147483648",
"shmem:used_size = 248809096"
],
"id": 212042
}
pkg.stats is also not showing any signs of exhaustion.
In my case the server can send packets, but does not receive them
back. And it's happening only on one interface.
On 30/08/2023 at 21:44, Bastian Triller wrote:
> I'm asking because we observed somewhat similar behaviour after
> overload. We did some load testing, and at some point SHM was
> exhausted and did not recover by itself. One process was using nearly
> 100% CPU. And I guess that at least sl replies still worked at that point.
>
> On Wed, Aug 30, 2023 at 9:28 PM Ihor Olkhovskyi
> <igorolhovskiy(a)gmail.com> wrote:
>
> Bastian,
>
> I'm fine with retransmissions as such, but what I find really
> interesting is that the server does not restore its state after a while
> and only a restart helps. And this is happening only with UDP on one
> interface; the others are working fine over UDP.
>
> On 30/08/2023 at 17:19, Bastian Triller wrote:
>> Maybe it's not beneficial to have a receive buffer that high. If
>> your presence server is stateful, it will send retransmissions if
>> packets get dropped. But it will also send retransmissions if
>> your proxy does not reply fast enough. So your proxy may be
>> swamped by retransmissions. Did you check SHM/active transactions
>> on your proxy?
>>
>> Regards,
>> Bastian
>>
>> On Wed, Aug 30, 2023 at 3:56 PM Ihor Olkhovskyi
>> <igorolhovskiy(a)gmail.com> wrote:
>>
>> They are increasing, actually
>>
>> # ss -l -u -m src X.X.X.X/Y
>> State Recv-Q Send-Q Local
>> Address:Port Peer Address:Port
>> UNCONN 25167616 0 X.X.X.X:sip *:*
>> skmem:(r25167616,rb25165824,t0,tb65536,f2304,w0,o0,bl0,d514894)
>>
>> On Wed, 30 Aug 2023 at 15:04, Bastian Triller
>> <bastian.triller(a)gmail.com> wrote:
>>
>> Are drops increasing on that socket while it is happening?
>> ss -l src <local_interface> -u sport 5060 -m
>>
>> On Tue, Aug 29, 2023 at 3:26 PM Ihor Olkhovskyi
>> <igorolhovskiy(a)gmail.com> wrote:
>>
>> Just to add some info
>>
>> netstat -nlp
>> Active Internet connections (only servers)
>> Proto Recv-Q Send-Q Local Address Foreign Address
>> State PID/Program name
>> ...
>> udp 25167616 0 <local_interface>:5060
>> 0.0.0.0:* 211759/kamailio
>> ...
>>
>> So I see a huge Receive Queue on UDP for Kamailio
>> which is not clearing.
>>
>> On Tue, 29 Aug 2023 at 14:29, Ihor Olkhovskyi
>> <igorolhovskiy(a)gmail.com> wrote:
>>
>> Hello,
>>
>> I've run into a somewhat strange issue, but first a bit of
>> preface. I have Kamailio as a proxy (TLS/WS <->
>> UDP) and a second Kamailio as a presence server. At
>> some point the presence server accepts around 5K
>> PUBLISH requests within 1 minute and sends around the
>> same amount of NOTIFYs to the proxy Kamailio.
>>
>> The proxy is "transforming" the protocol to TLS, but at
>> some point I'm starting to get these types of errors:
>>
>> tm [../../core/forward.h:292]: msg_send_buffer():
>> tcp_send failed
>> tm [t_fwd.c:1588]: t_send_branch(): sending
>> request on branch 0 failed
>> <script>: [RELAY] Relay to
>> <sip:X.X.X.X:51571;transport=tls> failed!
>> tm [../../core/forward.h:292]: msg_send_buffer():
>> tcp_send failed
>> tm [t_fwd.c:1588]: t_send_branch(): sending
>> request on branch 0 failed
>>
>> Some of those messages are 100% valid, as the client
>> may have gone away. Some are not, because I'm sure the
>> client is alive and connected.
>>
>> But the problem comes later. At some moment the proxy
>> Kamailio just stops accepting UDP traffic on this
>> interface (where it also accepts all the NOTIFYs);
>> when this "stopping accepting" starts, Kamailio
>> sends OPTIONS via DISPATCHER but is not able to
>> receive the 200 OK.
>>
>> Over TLS on the same interface all is OK. On the
>> other (loopback) interface UDP is being processed
>> fine, so I don't suspect an open-files limit
>> here.
>>
>> Only a restart of the Kamailio proxy process helps
>> in this case.
>>
>> I've tuned net.core.rmem_max and
>> net.core.rmem_default to 25 MB, so in theory the
>> buffer should not be the issue.
>>
>> Is there some internal "interface buffer" in
>> Kamailio that is not freed upon a failed send, or
>> maybe I've missed something?
>>
>> Kamailio 5.6.4
>>
>> fork=yes
>> children=12
>> tcp_children=12
>>
>> enable_tls=yes
>>
>> tcp_accept_no_cl=yes
>> tcp_max_connections=63536
>> tls_max_connections=63536
>> tcp_accept_aliases=no
>> tcp_async=yes
>> tcp_connect_timeout=10
>> tcp_conn_wq_max=63536
>> tcp_crlf_ping=yes
>> tcp_delayed_ack=yes
>> tcp_fd_cache=yes
>> tcp_keepalive=yes
>> tcp_keepcnt=3
>> tcp_keepidle=30
>> tcp_keepintvl=10
>> tcp_linger2=30
>> tcp_rd_buf_size=80000
>> tcp_send_timeout=10
>> tcp_wq_blk_size=2100
>> tcp_wq_max=10485760
>> open_files_limit=63536
>>
>> Sysctl
>>
>> # To increase the amount of memory available for
>> socket input/output queues
>> net.ipv4.tcp_rmem = 4096 25165824 25165824
>> net.core.rmem_max = 25165824
>> net.core.rmem_default = 25165824
>> net.ipv4.tcp_wmem = 4096 65536 25165824
>> net.core.wmem_max = 25165824
>> net.core.wmem_default = 65536
>> net.core.optmem_max = 25165824
>>
>> # To limit the maximum number of requests queued
>> to a listen socket
>> net.core.somaxconn = 128
>>
>> # Tells TCP to instead make decisions that would
>> prefer lower latency.
>> net.ipv4.tcp_low_latency=1
>>
>> # Optional (it will increase performance)
>> net.core.netdev_max_backlog = 1000
>> net.ipv4.tcp_max_syn_backlog = 128
>>
>> # Flush the routing table to make changes happen
>> instantly.
>> net.ipv4.route.flush=1
>> --
>> Best regards,
>> Ihor (Igor)
>>
>>
>>
>> --
>> Best regards,
>> Ihor (Igor)
>> __________________________________________________________
>> Kamailio - Users Mailing List - Non Commercial
>> Discussions
>> To unsubscribe send an email to
>> sr-users-leave(a)lists.kamailio.org
>> Important: keep the mailing list in the recipients,
>> do not reply only to the sender!
>> Edit mailing list options or unsubscribe:
>>
>>
>>
>>
>> --
>> Best regards,
>> Ihor (Igor)
>>
>
Hi people,
I installed Kamailio 5.3 on Ubuntu Server 22 LTS, but I am not able to resolve several PHP errors that occur when I try to open the website http://localhost/siremis/.
Which Linux distro and version is best suited for installation?
Is Kamailio version 5.3 operational or is it better to use version 4.4?
Att.,
Daniel Hilário
INFRACOM/UFRGS
51 3308 4801