Hello,
the performance of KEMI was compared when using Kamailio exported
functions, and a function implemented inside Kamailio has pretty much
the same performance whether it is used inside a KEMI script or inside
the native config file. For example, if one uses
KSR.auth.pv_auth_check(...) inside a KEMI script or pv_auth_check(...)
inside kamailio.cfg, then the performance is very much the same.
Now, if you use a pure Python/Lua/... library inside your KEMI script
and it has poor performance, then of course Kamailio is slowed down;
there is nothing the Kamailio code can do to improve that.
If you need to access a database, the sqlops functions are also exported to KEMI.
To summarize, the goal was that the KEMI engine adds no (or at least
no significant) overhead when executing Kamailio-specific functions
compared to execution by the native config interpreter.
Cheers,
Daniel
On 13.01.25 03:56, Daniel W. Graham via sr-users wrote:
Native config resulted in over 10k calls per second
without any drops.
I made some adjustments to the python config; it looks like there was
an issue with the way I had been using return. It is handling almost
10k calls now.
However, when adding in other logic, such as database calls using
python libraries, it is back to a few dozen calls before the queue
grows and drops occur.
If performance is expected to be close to native, then I probably need
to continue identifying configuration issues.
-dan
------------------------------------------------------------------------
*From:* Daniel W. Graham via sr-users <sr-users(a)lists.kamailio.org>
*Sent:* Sunday, January 12, 2025 10:32
*To:* Henning Westerholt <hw(a)gilawa.com>; Kamailio (SER) - Users
Mailing List <sr-users(a)lists.kamailio.org>
*Cc:* Daniel W. Graham <dan(a)cmsinter.net>
*Subject:* [SR-Users] Re: Performance issues with KEMI
*Caution:* This email originated from outside of the organization. Do
not click links or open attachments unless you recognize the sender
and know the content is safe
I haven't converted this test to native configuration yet. That is my
next step to rule out any general issues. I've never experienced
performance issues like this before, and since this was my first test
using KEMI, I assumed it was related in some fashion. I've read through
performance optimization related posts and documents that I've been
able to find.
-dan
------------------------------------------------------------------------
*From:* Henning Westerholt <hw(a)gilawa.com>
*Sent:* Sunday, January 12, 2025 03:52
*To:* Kamailio (SER) - Users Mailing List <sr-users(a)lists.kamailio.org>
*Cc:* Daniel W. Graham <dan(a)cmsinter.net>
*Subject:* RE: Performance issues with KEMI
Hello Daniel,
I am wondering if your issues are specific to KEMI, e.g. have you
also tried the same script logic with a native cfg and observed
similar numbers? If it is a simple script, you can maybe just repeat
the same test. There were benchmarks done for KEMI some years ago
which showed only a small performance difference.
Or do you have generic performance issues which you just happened to
observe in your test with KEMI? In that case it would be more of a
generic performance optimization topic.
Cheers,
Henning
--
Henning Westerholt – https://skalatan.de/blog/
Kamailio services – https://gilawa.com
*From:* Daniel W. Graham via sr-users <sr-users(a)lists.kamailio.org>
*Sent:* Sonntag, 12. Januar 2025 08:21
*To:* sr-users(a)lists.kamailio.org
*Cc:* Daniel W. Graham <dan(a)cmsinter.net>
*Subject:* [SR-Users] Performance issues with KEMI
Testing out KEMI functionality and running into performance issues.
If I exceed 150 calls per second, the network receive queue grows and
Kamailio is unable to keep up with requests, so they begin dropping.
The KEMI script for testing just sends a stateless reply to INVITEs,
using the python3s module.
I've played with Kamailio child processes and memory allocations, but
there is no impact. I've also attempted some buffer/memory tweaking at
the OS level, again with no impact. Increasing CPU cores and even
running the test on bare metal gives the same result.
Example of the receive queue at 150 calls per second:
Netid  State   Recv-Q  Send-Q  Local Address:Port
udp    UNCONN  337280  0       x.x.x.x:5060
Just wondering if anyone has experienced similar issues or has an
example of the performance they are seeing before I continue down this
path.
Thanks,
- dan
__________________________________________________________
Kamailio - Users Mailing List - Non Commercial Discussions --
sr-users(a)lists.kamailio.org
To unsubscribe send an email to sr-users-leave(a)lists.kamailio.org
Important: keep the mailing list in the recipients, do not reply only to the sender!
--
Daniel-Constantin Mierla (@asipto.com)
twitter.com/miconda -- linkedin.com/in/miconda
Kamailio Consultancy, Training and Development Services -- asipto.com