Hello!
We had several crashes of Kamailio (5.3.4) in the last few weeks. Each time,
the last logs are:
/usr/local/sbin/kamailio[20942]: CRITICAL: <core> [core/forward.c:347]:
get_send_socket2(): unsupported proto 0 (*)
/usr/local/sbin/kamailio[20966]: CRITICAL: <core> [core/forward.c:347]:
get_send_socket2(): unsupported proto 111 (unknown)
When we look at the coredump, these are the first frames of the backtrace:
(gdb) bt
#0 0x00007f70131b3d91 in prepare_new_uac (t=0x7f6fe1d45340,
i_req=0x7f7016fbe208, branch=0, uri=0x7fff37142fa0, path=0x7fff37142f80,
next_hop=0x7f7016fbe480, fsocket=0x0,
snd_flags=..., fproto=0, flags=2, instance=0x7fff37142f70,
ruid=0x7fff37142f60, location_ua=0x7fff37142f50) at t_fwd.c:463
#1 0x00007f70131b7c42 in add_uac (t=0x7f6fe1d45340, request=0x7f7016fbe208,
uri=0x7f7016fbe480, next_hop=0x7f7016fbe480, path=0x7f7016fbe848, proxy=0x0,
fsocket=0x0,
snd_flags=..., proto=0, flags=2, instance=0x7f7016fbe858,
ruid=0x7f7016fbe870, location_ua=0x7f7016fbe880) at t_fwd.c:805
#2 0x00007f70131bfebd in t_forward_nonack (t=0x7f6fe1d45340,
p_msg=0x7f7016fbe208, proxy=0x0, proto=0) at t_fwd.c:1667
#3 0x00007f701316cf44 in t_relay_to (p_msg=0x7f7016fbe208, proxy=0x0,
proto=0, replicate=0) at t_funcs.c:332
#4 0x00007f70131a3a11 in _w_t_relay_to (p_msg=0x7f7016fbe208, proxy=0x0,
force_proto=0) at tm.c:1689
#5 0x00007f70131a4c51 in w_t_relay (p_msg=0x7f7016fbe208, _foo=0x0,
_bar=0x0) at tm.c:1889
#6 0x00000000005a1f57 in do_action (h=0x7fff37143d90, a=0x7f7016bd1b10,
msg=0x7f7016fbe208) at core/action.c:1071
#7 0x00000000005aeb1e in run_actions (h=0x7fff37143d90, a=0x7f7016bd1b10,
msg=0x7f7016fbe208) at core/action.c:1576
#8 0x00000000005af1df in run_actions_safe (h=0x7fff37146e90,
a=0x7f7016bd1b10, msg=0x7f7016fbe208) at core/action.c:1640
#9 0x000000000066aa50 in rval_get_int (h=0x7fff37146e90,
msg=0x7f7016fbe208, i=0x7fff37144238, rv=0x7f7016bd1e60, cache=0x0) at
core/rvalue.c:915
#10 0x000000000066f000 in rval_expr_eval_int (h=0x7fff37146e90,
msg=0x7f7016fbe208, res=0x7fff37144238, rve=0x7f7016bd1e58) at
core/rvalue.c:1913
#11 0x000000000066f453 in rval_expr_eval_int (h=0x7fff37146e90,
msg=0x7f7016fbe208, res=0x7fff371446ec, rve=0x7f7016bd2588) at
core/rvalue.c:1921
#12 0x00000000005a1a1d in do_action (h=0x7fff37146e90, a=0x7f7016bd2e08,
msg=0x7f7016fbe208) at core/action.c:1047
#13 0x00000000005aeb1e in run_actions (h=0x7fff37146e90, a=0x7f7016bcbd20,
msg=0x7f7016fbe208) at core/action.c:1576
#14 0x000000000059e97d in do_action (h=0x7fff37146e90, a=0x7f7016ef19b8,
msg=0x7f7016fbe208) at core/action.c:695
#15 0x00000000005aeb1e in run_actions (h=0x7fff37146e90, a=0x7f7016e3f578,
msg=0x7f7016fbe208) at core/action.c:1576
#16 0x000000000059e97d in do_action (h=0x7fff37146e90, a=0x7f7016c220a0,
msg=0x7f7016fbe208) at core/action.c:695
#17 0x00000000005aeb1e in run_actions (h=0x7fff37146e90, a=0x7f7016c220a0,
msg=0x7f7016fbe208) at core/action.c:1576
#18 0x00000000005aae93 in do_action (h=0x7fff37146e90, a=0x7f7016c23978,
msg=0x7f7016fbe208) at core/action.c:1207
#19 0x00000000005aeb1e in run_actions (h=0x7fff37146e90, a=0x7f7016c20748,
msg=0x7f7016fbe208) at core/action.c:1576
#20 0x00000000005a1ec6 in do_action (h=0x7fff37146e90, a=0x7f7016c576a0,
msg=0x7f7016fbe208) at core/action.c:1062
#21 0x00000000005aeb1e in run_actions (h=0x7fff37146e90, a=0x7f7016c0b770,
msg=0x7f7016fbe208) at core/action.c:1576
#22 0x000000000059e97d in do_action (h=0x7fff37146e90, a=0x7f7016bc9920,
msg=0x7f7016fbe208) at core/action.c:695
#23 0x00000000005aeb1e in run_actions (h=0x7fff37146e90, a=0x7f7016bb9100,
msg=0x7f7016fbe208) at core/action.c:1576
#24 0x00000000005af2a7 in run_top_route (a=0x7f7016bb9100,
msg=0x7f7016fbe208, c=0x0) at core/action.c:1661
#25 0x00000000005bcb7a in receive_msg (
buf=0xadbd80 <buf.6971> "INVITE sip:AAAAAAAAAA@W.X.Y.Z SIP/2.0\r\nVia:
SIP/2.0/UDP W.X.Y.Z;rport;branch=z9hG4bKZD8Za7p2XF7cF\r\nMax-Forwards:
69\r\nFrom: \"AAAAAAAAAA\" sip:AAAAAAAAAA@W.X.Y.Z;tag=aXy4Na2NtvKBK\r\nTo:
<"..., len=1011, rcv_info=0x7fff371474c0) at core/receive.c:423
#26 0x0000000000488e7f in udp_rcv_loop () at core/udp_server.c:548
#27 0x000000000042650f in main_loop () at main.c:1673
#28 0x000000000042ec18 in main (argc=7, argv=0x7fff37147c38) at main.c:2802
Is there anything relevant in this backtrace that indicates what is going wrong?
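For context, a minimal diagnostic sketch (hypothetical, not taken from our running config) that could be placed just before the t_relay() call to log the computed destination and its transport for the offending INVITEs:

# hypothetical logging right before the existing t_relay() call
if (is_method("INVITE")) {
    # $du may be empty; {uri.transport} is empty when the URI has no transport parameter
    xlog("L_INFO", "relaying $rm: ruri=$ru du=$du ruri-transport=$(ru{uri.transport}) rcv-proto=$proto\n");
}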
Regards,
Igor.
Hello,
we are considering organizing another edition of the Kamailio Developers
Meeting, to be hosted again by sipgate.de in Dusseldorf, Germany. There
is no fee for participants (i.e., free entry), but each one has to take
care of travelling and accommodation (drinks and food are provided
during the day).
A few developers are already on board; we are looking to see if anyone
else from the broader community wants to join us. It will likely run for
two days (with the option of extending it to three days still open at
this moment, if there is enough interest). The proposed time frames are:
- Nov 7-9, 2023
- Nov 21-23, 2023
The event is about discussing the internals of Kamailio (e.g.,
architectural changes, development strategies) and writing code or
documentation for it. This is not an event about learning how to use
Kamailio, but about working on improving it. As a bonus, there will be a
social networking event in the evening with some fun activities, drinks
and food.
If you want to join, state your preference for either of the two time
frames (or even just for two of the days within one of them). The meeting
room is not large, so if we get too many requests to join, we may have to
select based on past contributions; otherwise it is simply first come,
first served.
You can reply to this thread or just write to me directly if you want to
join and which dates you prefer.
By the middle of next week we plan to select the event dates, so everyone
attending can start planning the trip.
Cheers,
Daniel
--
Daniel-Constantin Mierla (@ asipto.com)
twitter.com/miconda -- linkedin.com/in/miconda
Kamailio Consultancy - Training Services -- asipto.com
Kamailio World Conference - kamailioworld.com
Hello Team,
I hope you're all doing well. I'm reaching out to request your insights on the following scenario.
Overview:
I have successfully configured Kamailio to authenticate IP addresses, primarily for PBXs and customers unable to register. With this setup, I can flawlessly receive incoming traffic and forward it to our internal FreeSwitch servers.
Current issue:
Now, I'm focusing on managing traffic in the opposite direction—specifically for DIDs. I have a DID table set up in Kamailio that associates DID numbers with their respective IP peers. Upon receiving a call, a database lookup is performed to find the destination number, after which the call is terminated at the customer's IP.
I'm facing difficulty in handling situations where a customer has multiple IP peers (for redundancy). My aim is to try each of these IPs sequentially when terminating a specific DID call towards them. The current logic I'm employing for the INVITE lookup looks like this:
# Database lookup for INVITEs
if (is_method("INVITE")) {
    sql_query("didrouting_db", "SELECT route_to FROM did_routing WHERE did_number='$rU'", "result");
    if ($dbr(result=>rows) > 0) {
        $var(route_to) = $dbr(result=>[0,0]);
        xlog("L_INFO", "Routing DID $rU to $var(route_to)\n");
        $du = "sip:" + $var(route_to) + ":5060";
    } else {
        $du = "sip:" + FS_IP + ":5060";
    }
}
The above logic works fine for a single IP.
I intend to use a comma-separated list of IPs in my route_to column (for example 192.168.1.2,192.168.1.3, ...) and then iterate through these IPs one by one. I'm having trouble coming up with a workable logic or loop to achieve this. Would you be able to offer any guidance or suggestions?
Please also recommend if there is a better approach available to handle this scenario.
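For what it's worth, here is a minimal sketch of one way to do serial failover over such a list. It assumes the peers are stored with a ';' separator (to keep the {s.select} transformation syntax unambiguous) and that {s.select} accepts a pseudo-variable index and returns $null past the last entry; the route name and other details are illustrative and untested:

# in the request route, after the DB lookup; $var(route_to) holds e.g. "192.168.1.2;192.168.1.3"
$avp(peers) = $var(route_to);
$avp(idx) = 0;
$du = "sip:" + $(avp(peers){s.select,0,;}) + ":5060";
t_on_failure("DID_FAILOVER");

failure_route[DID_FAILOVER] {
    if (t_is_canceled()) exit;
    # move to the next peer; AVPs are kept with the transaction across failure_route
    $avp(idx) = $avp(idx) + 1;
    if ($(avp(peers){s.select,$avp(idx),;}) != $null) {
        $du = "sip:" + $(avp(peers){s.select,$avp(idx),;}) + ":5060";
        t_on_failure("DID_FAILOVER");
        t_relay();
    }
}

As an alternative, the dispatcher module with one destination set per customer might give you the failover (and gateway probing) without parsing lists in the routing script.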
Thank you in advance for your valuable input.
Regards,
Shah Hussain
Hi all
I see this old thread from 2015.
- https://kamailio.org/mailman3/hyperkitty/list/sr-users@lists.kamailio.org/t…
It's about max_inv_lifetime not working properly.
I have this problem today: SIP-180 responses from downstream
apparently reset max_inv_lifetime, and the transaction can live for
hours (apparently indefinitely) even though max_inv_lifetime is set to
just over a minute.
By spacing the SIP-180 responses (from downstream) slightly further
apart than max_inv_lifetime in Kamailio, I don't get the problem, so it
looks to me like this is tm not matching the documentation (which says
this should be a hard limit).
With this configuration, SIP-180 responses every minute result in the
transaction never ending (until the client CANCELs the call).
- kamailio.cfg:modparam("tm", "fr_timer", 3000)
- kamailio.cfg:modparam("tm", "fr_inv_timer", 300000)
- kamailio.cfg:modparam("tm", "max_inv_lifetime", 65000)
- kamailio.cfg:modparam("tm", "noisy_ctimer", 1)
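For reference, tm also exports a per-transaction variant of this limit, t_set_max_lifetime(); a minimal sketch (values in milliseconds, untested as a workaround, and presumably subject to the same reset behaviour if this is indeed a bug):

if (is_method("INVITE")) {
    # per-transaction cap: 65s for INVITE, 35s for non-INVITE (illustrative values)
    t_set_max_lifetime("65000", "35000");
}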
That other thread ends with Daniel mentioning that he'd have to look
at the code, but nothing further.
Running "git log --grep max_inv_lifetime" shows me that the last/only
update that wasn't just documentation was in 2007.
Has anyone got any ideas?
James
Hi list,
I'm trying to use Kamailio 4.4.4 with rtpengine in a self-inflicted
emergency situation (didn't monitor traffic growth properly and now
encountering packet loss during peak times) as a drop-in replacement for
an overloaded Asterisk box in a call-termination-to-upstream-carrier
scenario.
My test scenario is to make a call from a SIP softphone to Asterisk IP
1.1.1.1 -> Kamailio/rtpengine IP 2.2.2.2 -> Upstream carrier 3.3.3.3
Captured with sngrep on the Kamailio box 2.2.2.2: the following INVITE/SDP
will not work; the carrier is rejecting it. The carrier authenticates our
calls based on our IP address 2.2.2.2, no username/password involved.
2023/09/22 02:06:49.216136 2.2.2.2:5060 -> 3.3.3.3:5060
INVITE sip:+32xxxxxxxx@2.2.2.2;user=phone SIP/2.0
Record-Route: <sip:2.2.2.2;lr>
Via: SIP/2.0/UDP
2.2.2.2;branch=z9hG4bKd9c3.d6fa3abe5d52b827e2054de5573028e0.0
Via: SIP/2.0/UDP 1.1.1.1:5060;branch=z9hG4bK473270e8
Max-Forwards: 69
From: "61xxxxxxxxx" <sip:+61xxxxxxxxx@1.1.1.1>;tag=as3d75aadd
To: <sip:+32xxxxxxxx@2.2.2.2;user=phone>
Contact: <sip:+61xxxxxxxxx@1.1.1.1:5060>
Call-ID: 3f31e1622a72b6d17f24e42362f4f1d0@1.1.1.1:5060
CSeq: 102 INVITE
User-Agent: Asterisk PBX 20.0.0
Date: Fri, 22 Sep 2023 00:06:50 GMT
Session-Expires: 1800
Min-SE: 90
Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY,
INFO, PUBLISH, MESSAGE
Supported: replaces, timer
P-Asserted-Identity: <sip:+61xxxxxxxxx@2.2.2.2;user=phone>
Content-Type: application/sdp
Content-Length: 314
X-SIP: 1.1.1.1
v=0
o=root 1093000903 1093000903 IN IP4 1.1.1.1
s=Asterisk PBX 20.0.0
c=IN IP4 2.2.2.2
t=0 0
m=audio 25742 RTP/AVP 8 9 0 101
a=maxptime:150
a=rtpmap:8 PCMA/8000
a=rtpmap:9 G722/8000
a=rtpmap:0 PCMU/8000
a=rtpmap:101 telephone-event/8000
a=fmtp:101 0-16
a=sendrecv
a=rtcp:25743
a=ptime:20
I'm comparing this rejected INVITE to a successful INVITE sent by the
original Asterisk box at IP 2.2.2.2 (now the Kamailio box) to the carrier
without Kamailio in the path. These are the differences I noticed, and
probably the things I have to mimic with Kamailio in order to make it
work:
INVITE sip:+32xxxxxxxxx@2.2.2.2;user=phone SIP/2.0
should be
INVITE sip:+32xxxxxxxxx@3.3.3.3;user=phone SIP/2.0
To: <sip:+32xxxxxxxx@2.2.2.2;user=phone>
should be
To: <sip:+32xxxxxxxx@3.3.3.3;user=phone>
From: "61xxxxxxxxx" <sip:+61xxxxxxxxx@1.1.1.1>;tag=as3d75aadd
should be
From: "61xxxxxxxxx" <sip:+61xxxxxxxxx@2.2.2.2>;tag=as3d75aadd
Contact: <sip:+61xxxxxxxxx@1.1.1.1:5060>
should be
Contact: <sip:+61xxxxxxxxx@2.2.2.2:5060>
Call-ID: 3f31e1622a72b6d17f24e42362f4f1d0@1.1.1.1:5060
should be
Call-ID: 3f31e1622a72b6d17f24e42362f4f1d0@2.2.2.2:5060
o=root 1093000903 1093000903 IN IP4 1.1.1.1
should be
o=root 1093000903 1093000903 IN IP4 2.2.2.2
My kamailio.cfg can be found here: https://pastebin.com/6PKcRjPU
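For what it's worth, a minimal sketch of the pieces that usually cover part of that list: the destination towards the carrier via the dispatcher module, the From URI via the uac module, and the SDP o=/c= lines via rtpengine. Contact and Call-ID are normally left untouched by a proxy, so those two differences may not need mimicking at all. Route names, flags and the From URI below are illustrative and untested:

if (is_method("INVITE")) {
    record_route();
    # pick the carrier from dispatcher set 1 (sets $du, e.g. sip:3.3.3.3:5060); 4 = round-robin
    if (!ds_select_dst("1", "4")) {
        sl_send_reply("503", "No gateway available");
        exit;
    }
    # if the carrier also wants its own address in the request-URI: $rd = $dd;
    # present our IP in the From URI, keeping the original user part (uac module)
    uac_replace_from("sip:$fU@2.2.2.2");
    # rewrite the SDP origin and connection lines and relay media through rtpengine
    rtpengine_manage("replace-origin replace-session-connection");
    t_relay();
}

If the carrier also checks the To URI, uac_replace_to() can rewrite it in the same way.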
These are the Asterisk boxes I want to originate calls from to Kamailio:
[root@voip30 ~]# kamctl address show
+-----+-----+----------+------+------+-----------+
| id | grp | ip_addr | mask | port | tag |
+-----+-----+----------+------+------+-----------+
| 195 | 1 | 1.1.1.1 | 32 | 0 | voip20.sv |
| 196 | 1 | 1.1.1.2 | 32 | 0 | voip21.sv |
| 197 | 1 | 1.1.1.3 | 32 | 0 | voip22.sv |
| 198 | 1 | 1.1.1.4 | 32 | 0 | voip23.sv |
| 199 | 1 | 1.1.1.5 | 32 | 0 | voip24.sv |
| 200 | 1 | 1.1.1.6 | 32 | 0 | voip25.sv |
| 201 | 1 | 1.1.1.7 | 32 | 0 | voip26.sv |
| 202 | 1 | 1.1.1.8 | 32 | 0 | voip27.sv |
| 203 | 1 | 1.1.1.9 | 32 | 0 | voip28.sv |
+-----+-----+----------+------+------+-----------+
This is the upstream carrier I want Kamailio to proxy calls to:
[root@voip30 ~]# kamctl dispatcher show
dispatcher gateways
+----+-------+------------------+-------+-------+------------+------+
| id | setid | destination | flags | prio. | attrs | desc |
+----+-------+------------------+-------+-------+------------+------+
| 12 | 1 | sip:3.3.3.3:5060 | 0 | 0 | weight=100 | |
+----+-------+------------------+-------+-------+------------+------+
(output manually adjusted slightly so it displays properly over e-mail)
As you might have guessed I'm a Kamailio noob... and don't have the
resources to learn it as fast as I must to avoid further packet loss. If
there's anyone available who can help me to get this done today,
optionally in exchange for money, I'd be grateful.
Thank you!
Markus
What's the best way to store multi-dimensional data within an htable? For example, storing an avp stack:
$avp(foo) = "first";
$avp(foo) = "second";
$sht(bar=>foo) = $avp(foo);
The result of which is that only $avp(foo[0]) is stored in the htable:
kamcmd htable.dump bar
{
entry: 2
size: 1
slot: {
{
name: bar
value: second
type: str
}
}
}
The htable documentation shows support for lists, but it doesn't make clear whether this is really usable outside of loading from a database, and it looks like shorthand for managing/creating linked lists. The other option would be to simply serialize the data before storing it in the htable, which is fine; I just want to be sure I'm not overlooking a more convenient method.
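In case it helps, a minimal sketch of the serialization-by-key approach, storing each stack position under its own htable key. The "foo::N" key naming and the count key are arbitrary assumptions, as is the use of pseudo-variables in the AVP index and htable key; note that $avp(foo)[0] is the most recently added value:

# store: one htable key per AVP stack position
$var(i) = 0;
while ($var(i) < $cnt($avp(foo))) {
    $sht(bar=>foo::$var(i)) = $(avp(foo)[$var(i)]);
    $var(i) = $var(i) + 1;
}
$sht(bar=>foo::count) = $cnt($avp(foo));

# read back
$var(i) = 0;
while ($var(i) < $sht(bar=>foo::count)) {
    xlog("L_INFO", "foo[$var(i)] = $sht(bar=>foo::$var(i))\n");
    $var(i) = $var(i) + 1;
}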
Regards,
Kaufman
Hello all
we are seeing these kind of logs in a Debian GNU/Linux 11 using kamailio
5.5.6
Sep 21 17:41:13 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155702]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:164.152.22.248:5060)
Sep 21 17:41:13 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155698]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:208.74.138.184:5060)
Sep 21 17:41:13 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155695]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:208.74.138.181:5060)
Sep 21 17:41:14 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155693]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:38.102.250.60:5060)
Sep 21 17:41:14 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155707]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:87.1.1.27:5060)
Sep 21 17:41:14 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155707]:
ERROR: tm [ut.h:302]: uri2dst2(): no corresponding socket for "87.1.1.27"
af 2
Sep 21 17:41:14 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155707]:
ERROR: tm [t_fwd.c:470]: prepare_new_uac(): can't fwd to af 2, proto 1 (no
corresponding listening socket)
Sep 21 17:41:14 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155697]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:208.74.138.184:5060)
Sep 21 17:41:14 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155694]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:192.40.216.97:5060)
Sep 21 17:41:14 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155694]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:208.74.138.180:5060)
Sep 21 17:41:14 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155714]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:87.1.1.27:5060)
The Kamailio instance we are using receives around 1500 calls per second
on average, which is when we start seeing these errors more frequently.
In this instance we are using a multihomed setup.
The listen address list is:
children=14
socket_workers=1
listen=udp:192.168.99.70:5081
listen=udp:192.168.99.81:5060
listen=udp:87.1.1.27:5060
listen=tcp:192.168.96.105:8095
listen=tcp:87.1.1.27:5060
tcp_children=6
port=5060
We use the first worker only to perform the OPTIONS requests from the
dispatcher module.
The sockets
listen=udp:192.168.99.81:5060
listen=udp:87.1.1.27:5060
are used to communicate with both the A and B legs, so when a message is
received on 192.168.99.81:5060 we send it out via 87.1.1.27:5060 and vice
versa.
We were setting the $fs variable before calling the t_relay() function.
This works, but when load increases it seems Kamailio sometimes does not
pick the proper socket to forward the reply.
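For reference, a minimal sketch of what that explicit socket selection before t_relay() looks like in our case (addresses as in the listen list above, otherwise illustrative):

# requests received on the private interface go out via the public one, and vice versa
if ($Ri == "192.168.99.81") {
    $fs = "udp:87.1.1.27:5060";
} else if ($Ri == "87.1.1.27") {
    $fs = "udp:192.168.99.81:5060";
}
t_relay();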
I think the errors are mainly related to responses being forwarded, as if
the function get_sock_info_list() were not able to retrieve the listening
interfaces.
We have tried setting in the onreply routes the commands
set_recv_socket("udp:192.168.99.81:5060");
set_send_socket("udp:87.1.1.27:5060"); (when the reply goes from the
private to the public domain)
and it seems this reduces the number of
Sep 21 17:41:13 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155702]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:164.152.22.248:5060)
Sep 21 17:41:13 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155698]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:208.74.138.184:5060)
Sep 21 17:41:13 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155695]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:208.74.138.181:5060)
Sep 21 17:41:14 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155693]:
ERROR: <core> [core/forward.c:183]: get_out_socket(): no corresponding
socket found for(udp:38.102.250.60:5060)
errors
But I honestly don't know why the
Sep 21 17:41:14 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155707]:
ERROR: tm [ut.h:302]: uri2dst2(): no corresponding socket for "87.1.1.27"
af 2
Sep 21 17:41:14 lax-dedge-1 /usr/local/kamailio/sbin/kamailio[3155707]:
ERROR: tm [t_fwd.c:470]: prepare_new_uac(): can't fwd to af 2, proto 1 (no
corresponding listening socket)
appear
We also tried increasing the kernel buffers rmem and wmem from 208 KB (the
default we had) to 4 MB.
Do you know of a reason that could cause these logs to appear?
Is there any setup change that could mitigate them?
thanks a lot and regards
david escartin
--
David Escartín
NOC engineer
www.sonoc.io
Hello,
due to some changes from a certain cloud infrastructure provider related to spam filtering with DMARC, we need to change the list manager "mailman" configuration for the following lists:
* sr-users
* sr-user-es
* sr-dev
* business
* sr-docs
* kamailio-announce
* the administrative mailing list
The change will impact the "From" that is displayed in your email program. Without this change, large e-mail providers (Gmail, Microsoft O365) will reject messages from senders with a certain strict DMARC policy. This is already affecting several people on the lists (for example with GitHub notification mails), but over time the problem will certainly increase.
Regarding the technical change:
We will munge the From: header so it doesn't contain the domain that triggers the DMARC rejection. Essentially, the Mailman list takes ownership of the message and injects its own address into the From: header. This can affect reply-to-sender, although we add the original From: address in the Reply-To: header (or sometimes the Cc: header) to reduce the impact of this. We plan to do this for all emails unconditionally, regardless of their sender DMARC policy, to have an identical behaviour for all users. Please refer to this [1] page for more details. For a general description of DMARC refer to this page [2].
This change should not have a big impact for you, but some users might need to adapt some mail filters. Therefore, we would like to gather your feedback on this change. We expect the most visible change to be that it will no longer be possible to simply reply by e-mail to mails from GitHub and have the reply "automatically" added to the discussed issue. So far this feature was rarely used and only by a few people.
We plan to implement this change by the end of this week; when it is done, it will also be announced shortly.
Best regards,
Henning Westerholt
[1] https://wiki.list.org/DEV/DMARC
[2] https://dmarc.org/overview/
--
Henning Westerholt - https://skalatan.de/blog/
Kamailio services - https://gilawa.com