Before DMQ there were other ways to do similar things. Is DMQ now considered the best way, assuming I am running a recent stable release?
Are there any good guides on clustering Kamailio? I am not finding much good information on the subject, specifically for multi-datacenter active-active setups where things like floating IPs and keepalived are not really an option.
I think the realistic answer is that it's getting to be that way. I think DMQ is now the recommended way to share usrloc and htable across multiple hosts. DMQ+dialog is still a work in progress. Not sure about DMQ+some other new stuff.
Kamailio (SER) - Users Mailing List sr-users@lists.kamailio.org https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-users
Hello!

I'm using DMQ in order to share:
- htable
- usrloc

For usrloc, everything seems to be working as expected.
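For reference, a minimal sketch of the kind of configuration this setup implies. The module parameters are real dmq/dmq_usrloc/htable options, but the addresses, worker count, and table name are assumptions for illustration:

```cfg
loadmodule "dmq.so"
loadmodule "dmq_usrloc.so"

# This node's DMQ bus address and a known peer to learn the bus from
# (both addresses are placeholders).
modparam("dmq", "server_address", "sip:192.0.2.10:5062")
modparam("dmq", "notification_address", "sip:192.0.2.11:5062")
modparam("dmq", "num_workers", 4)

# Replicate registrations (usrloc) over DMQ.
modparam("dmq_usrloc", "enable", 1)

# Replicate hash tables over DMQ; "ipban" is an example table name,
# marked for replication with dmqreplicate=1.
modparam("htable", "enable_dmq", 1)
modparam("htable", "htable", "ipban=>size=8;dmqreplicate=1;")
```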
On htable I have noticed, after some stress tests made with SIPp (25 cps / 600 concurrent calls), that one of the nodes (or sometimes both) eats a lot of CPU (300%) after the stress test has ended [all processes idle except for 4 DMQ handlers]. I also tried changing some parameters of the dmq module, like worker_usleep, with no change at all.

Lastly, running Kamailio in a Kubernetes cluster, I have several Kamailio instances that pop up and are then torn down with different IPs; because of this, DMQ seems unable to delete peers, logging an error like this for every defunct node:

router-3 17(34) ERROR: dmq [notification_peer.c:599]: notification_resp_callback_f(): deleting server sip:172.28.1.211:5062;status=active because of failed request
router-1 router-sr 17(33) ERROR: dmq [notification_peer.c:599]: notification_resp_callback_f(): deleting server sip:172.28.1.213:5062;status=active because of failed request
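As an aside, on dynamic platforms like Kubernetes it can help to point the notification address at a DNS name that resolves to all current pods (e.g. a headless service) and to resolve all of its records, so new pods discover the live peer set rather than a fixed IP. A sketch; the service name below is an assumption:

```cfg
# Hypothetical headless-service name; each new pod learns the current
# peer set from DNS instead of a fixed peer address.
modparam("dmq", "notification_address", "sip:kamailio-dmq.voip.svc.cluster.local:5062")
# Notify all A records behind the name, not just the first one.
modparam("dmq", "multi_notify", 1)
```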
Any suggestion is very much appreciated. Thanks!
On Sat, 26 Jan 2019 at 01:43, Alex Balashov <abalashov@evaristesys.com> wrote:
--
Alex Balashov | Principal | Evariste Systems LLC
Tel: +1-706-510-6800 / +1-800-250-5920 (toll-free)
Web: http://www.evaristesys.com/, http://www.csrpswitch.com/
On Saturday, 26 January 2019, 19:00:32 CET, * Paolo Visintin - evosip.cloud wrote:
Hello Paolo,
just commenting on the first issue:
So I understood you correctly: after the stress test you observe a never-ending CPU load on the Kamailio server, even without traffic?
Maybe you can have a look at the CPU-consuming processes by attaching e.g. "strace" to them on the next occasion, to see what they are doing.
Best regards,
Henning
Hello Henning, correctly understood: this is exactly what I've experienced (Kamailio 5.2.0). I'll absolutely run a new lab test and strace it!
Thanks for your suggestion! Best regards
On Sat, 26 Jan 2019 at 19:05, Henning Westerholt <hw@kamailio.org> wrote:
--
Henning Westerholt - https://skalatan.de/blog/
Kamailio services - https://skalatan.de/services
Kamailio security assessment - https://skalatan.de/de/assessment
Hello Henning, after some analysis (thanks to Enrico Bandiera and Giacomo Vacca for their work) we found a bug in the dmq module, internally made a quick-and-dirty patch, and opened an issue on GitHub!
Cheers
--
Paolo Visintin, technical director
Timenet srl | www.timenet.it | tel. 05711738000
Technical support: assistenza@timenet.it | Sales: sales@timenet.it | Accounting: contabilita@timenet.it
twitter.timenet.it | linkedin.timenet.it
Hello Paolo,
This should fix your issue: https://github.com/kamailio/kamailio/commit/a176ad4fb4167e21b01974e6a5caba33...
Let me know so it can be merged.
Best,
Charles
Hello Charles, thanks for the fast fix! Tomorrow morning we will test it and give you a follow-up!
Thanks!
Paolo Visintin
CTO, evosip.cloud
--
Charles Chance
Managing Director
t. 0330 120 1200 m. 07932 063 891
Sipcentric Ltd. Company registered in England & Wales no. 7365592. Registered office: Faraday Wharf, Innovation Birmingham Campus, Holt Street, Birmingham Science Park, Birmingham B7 4BB.