Hello,
are you defining a lot of Kamailio variables in your Ruby script? In other words, are you using variables with different names, like $var(xyz) or $sht(a=>xyz), where xyz is passed from or computed in the Ruby script and changes depending on the SIP message? Each distinct PV name is cached in private (pkg) memory and not freed, so dynamically generated names make pkg usage grow over time.
Cheers,
Daniel
Thanks Daniel, you’re fantastic!
I have 4 children/workers configured, with -m 128 -M 32. The machine in question has 512MB of memory, 1 core, and 1GB of swap on an SSD.
I restarted Kamailio with memlog=1 and have been sending in batches of 30 calls. I’ve noticed 4 of the 13 Kamailio processes growing in memory after each batch, which I suspect are the primary children/workers. Immediately after the restart:
root 28531 0.7 5.5 329368 27196 ? Sl 22:48 0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root 28532 0.6 4.9 329368 24528 ? Sl 22:48 0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root 28533 0.6 5.5 329368 27244 ? Sl 22:48 0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root 28534 0.7 5.4 329368 26788 ? Sl 22:48 0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
After about 90 calls:
root 28531 0.0 6.7 330688 32948 ? Sl 22:48 0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root 28532 0.0 6.5 330560 32264 ? Sl 22:48 0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root 28533 0.0 6.5 330556 32272 ? Sl 22:48 0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root 28534 0.0 6.6 330564 32592 ? Sl 22:48 0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
None of the other 9 Kamailio processes are increasing at all.
I ran corex.pkg_summary against one of them and got the following dump:
I can see a lot of allocations attributed to pvapi.c; does this indicate I’m setting PVs that need to be unset?
Here’s another snapshot, after a further 60 calls:
root 28531 0.0 6.9 330820 33928 ? Sl 22:48 0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root 28532 0.0 6.7 330692 33352 ? Sl 22:48 0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root 28533 0.0 6.7 330688 33280 ? Sl 22:48 0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root 28534 0.0 6.7 330696 33192 ? Sl 22:48 0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
The only changes I’ve made to this config over the last couple of weeks (since I first saw this issue) are: removing the dispatcher module; adding a small function in app_ruby (which I already use) that queries Redis (which I also already query heavily per call from app_ruby) for some values and writes $du manually; and loading the topoh module.
It also makes a lot of sense to me to monitor the individual processes rather than the aggregate. Is there a simple way to identify programmatically, from bash, which processes are the workers? I’d like to monitor just those individually (along the lines of the sketch below).
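Here is the kind of thing I mean (a rough sketch on my part; it assumes kamcmd’s core.psx RPC prints one field per line, e.g. “PID: 28531” and “DSC: udp receiver ...”, and that the SIP workers are the entries described as receivers; the output layout may differ between versions):

#!/usr/bin/env bash
# Sketch: print the PIDs of the Kamailio SIP worker processes.
# Assumes core.psx output like:
#   PID: 28531
#   DSC: udp receiver child=0 sock=...
kamcmd core.psx | awk '
  /PID:/ { pid = $2 }              # remember the PID of the current entry
  /DSC:.*receiver/ { print pid }   # SIP workers are described as receivers
'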
Thanks!
Andrew
On 1 Aug 2019, at 8:24 pm, Daniel-Constantin Mierla <miconda@gmail.com> wrote:
Hello,
if it is pkg, then you have to see which process is increasing its memory use, because it is private memory, specific to each process. The sum is an indicator, but the debugging has to be done for a specific process/PID.
Once you identify a process that is leaking pkg, execute the RPC command:
- https://www.kamailio.org/docs/modules/devel/modules/corex.html#corex.rpc.pkg_summary
When that process is doing some runtime work (e.g., handling a SIP message), the syslog will get a summary of the used pkg chunks. Send those log messages here for analysis. You have to set the memlog core parameter to a value smaller than debug.
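For example, something like this (a sketch; double-check the parameter names against the corex module documentation for your version):

# in kamailio.cfg, set memlog lower than the debug level, e.g.:
#   debug=2
#   memlog=1
# then trigger the pkg summary for the suspect process (PID from ps):
kamcmd corex.pkg_summary pid 28531
# once that process handles the next SIP message, the summary of used
# pkg chunks is written via syslog, e.g.:
tail -n 200 /var/log/syslog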
Cheers,
Daniel
On 01.08.19 03:43, Andrew White wrote:
Hi all,
I had a Kamailio crash the other day, and some debugging showed I ran out of PKG memory.
Since then I’ve run a simple bash script that totals the memory used by all child processes, effectively summing the output of /usr/local/sbin/kamcmd pkg.stats | grep real_used. I’ve graphed the data, and there is clear growth of PKG memory, mostly during our busier daytime hours.
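The script is essentially the following (a sketch of it; the awk parsing assumes kamcmd prints lines like “real_used: 123456”):

#!/usr/bin/env bash
# Sum the real_used pkg bytes reported for every Kamailio process.
/usr/local/sbin/kamcmd pkg.stats \
  | awk '/real_used/ { total += $2 } END { print total, "bytes pkg real_used across all processes" }'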
Based on this, I suspect either a loaded module or something within my app_ruby config is leaking memory.
I’ve been reading through https://www.kamailio.org/wiki/tutorials/troubleshooting/memory, but I’m a bit nervous, as I’m not really a C/deep-memory kind of guy. I can see a GDB script I can attach to Kamailio, but is that going to use significant resources or impact the running process? Is there a newer/better alternative way to do this that would help me break the problem down?
Thanks!
Andrew
-- Daniel-Constantin Mierla -- www.asipto.com www.twitter.com/miconda -- www.linkedin.com/in/miconda