### Description

We are experiencing a sticky OOM state after an extended period (several days) of operation. `kamcmd` inspection of memory use shows memory consumed (leaked) via `pv_cache_add(347)` calls.
Inspection of that function suggests that nothing can ever be dropped from the cache, rendering it leaky: `_pv_cache_counter` is tested twice, but after being initialized to zero it never appears to be assigned (incremented or otherwise), so the conditions needed to call `pv_cache_drop` are never met.
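For illustration, a minimal standalone sketch of the pattern described above (simplified, not the actual `core/pvapi.c` code; the drop limit and function bodies are hypothetical):

```c
/* Simplified sketch of the reported pattern: the cache-size counter is
 * compared against a drop limit but never incremented, so the drop
 * branch is unreachable and the cache only grows. */
#include <stdio.h>

static unsigned int _pv_cache_counter = 0;   /* initialized, never changed */
#define PV_CACHE_DROP_LIMIT 1024             /* hypothetical threshold */

static void pv_cache_drop_sketch(void)
{
    /* would evict droppable entries (e.g. $sht(...)) here */
    printf("dropping cached pv entries\n");
}

static void pv_cache_add_sketch(const char *pvname)
{
    /* allocate and store a new cache entry for pvname ... */
    printf("caching %s\n", pvname);

    /* the guard is evaluated, but since _pv_cache_counter is never
     * incremented anywhere, this condition can never become true */
    if (_pv_cache_counter > PV_CACHE_DROP_LIMIT)
        pv_cache_drop_sketch();

    /* fix direction: increment _pv_cache_counter on each successful add
     * (and decrement it in the drop path) */
}

int main(void)
{
    pv_cache_add_sketch("$sht(a=>key1)");
    pv_cache_add_sketch("$sht(a=>key2)");
    return 0;
}
```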
### Troubleshooting
#### Debugging Data
```
May 5 23:55:40 eng-X X[1107]: 1(105) INFO: qm_sums: count= 1 size= 2048 bytes from core: core/counters.c: cnt_hash_add(383)
May 5 23:55:40 eng-X X[1107]: 1(105) INFO: qm_sums: count= 18473 size= 5571880 bytes from core: core/pvapi.c: pv_cache_add(347)
May 5 23:55:40 eng-X X[1107]: 1(105) INFO: qm_sums: count= 162 size= 9344 bytes from core: core/pvapi.c: pv_parse_format(1150)
```
### Additional Information
```
kamcmd> ver
kamailio 5.2.2 (x86_64/linux)
kamcmd>
```
* **Operating System**:
```
Linux soak 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```
AFAIK pv_cache works that way by design: the pv variables used in the config are stored in private memory and are never cleaned up. There's no leak.
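For context, a minimal standalone sketch (assumptions only, not Kamailio source; a linked list stands in for the real hash table and `malloc` for `pkg_malloc`) of the lookup-or-add behaviour described above, which keeps the cache bounded when the config references a fixed set of pv names:

```c
/* Each distinct pv name is parsed once, stored in process-private
 * memory and reused afterwards; entries are never freed, so a fixed
 * set of names keeps the cache at a fixed size. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct pv_entry {
    char *name;
    struct pv_entry *next;
};

static struct pv_entry *cache_head = NULL;   /* stands in for the pv cache */

static struct pv_entry *pv_cache_get_sketch(const char *name)
{
    struct pv_entry *e;
    for (e = cache_head; e != NULL; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;                        /* already cached: reuse */
    e = malloc(sizeof(*e));                  /* pkg_malloc() in Kamailio */
    e->name = strdup(name);
    e->next = cache_head;
    cache_head = e;                          /* entries are never freed */
    return e;
}

int main(void)
{
    /* a fixed set of names, as in a static config: cache size stays at 2 */
    pv_cache_get_sketch("$ru");
    pv_cache_get_sketch("$fU");
    pv_cache_get_sketch("$ru");
    printf("done\n");
    return 0;
}
```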
Are you using different pv names for every SIP message that is processed? Are you using KEMI?
Yes, we are using KEMI (Python bindings) together with the htable module. It appears that `pv_cache_drop` was crafted specifically to drop `$sht` variables for this use case, but there is no obvious path by which it is ever called.
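A hedged sketch of the eviction path being discussed: once the entry count passes a threshold, only names that look like htable accesses (`$sht(...)`) are dropped, while config-declared pvs are kept. The threshold, data structure and selection logic here are illustrative assumptions; the real `pv_cache_drop()` in `core/pvapi.c` may differ:

```c
/* Sketch: per-message $sht(...) names grow the cache without bound
 * unless a counter triggers a selective drop of those entries. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct pv_entry {
    char *name;
    struct pv_entry *next;
};

static struct pv_entry *cache_head = NULL;
static unsigned int cache_count = 0;
#define PV_CACHE_LIMIT 8   /* hypothetical threshold */

/* Drop cached entries referencing htable items, i.e. names starting
 * with "$sht(", since those can be generated per SIP message. */
static void pv_cache_drop_sht(void)
{
    struct pv_entry **pp = &cache_head;
    while (*pp != NULL) {
        struct pv_entry *e = *pp;
        if (strncmp(e->name, "$sht(", 5) == 0) {
            *pp = e->next;
            free(e->name);
            free(e);
            cache_count--;
        } else {
            pp = &e->next;
        }
    }
}

static void pv_cache_add_sketch(const char *name)
{
    struct pv_entry *e = malloc(sizeof(*e));
    e->name = strdup(name);
    e->next = cache_head;
    cache_head = e;
    if (++cache_count > PV_CACHE_LIMIT)   /* the increment the report says was missing */
        pv_cache_drop_sht();
}

int main(void)
{
    char name[64];
    int i;
    for (i = 0; i < 100; i++) {           /* unique $sht name per "message" */
        snprintf(name, sizeof(name), "$sht(a=>callid-%d)", i);
        pv_cache_add_sketch(name);
    }
    printf("entries left: %u\n", cache_count);
    return 0;
}
```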
@gormania - indeed, there was an issue with incrementing/decrementing that counter. Can you test with latest master (or the patch referenced above)? If all ok, then it will be backported.
Thanks -- we have applied it to 5.2.2 and will be soak testing over the next couple of days.
Closed #1948.
Reopen if still an issue.