On Aug 29, 2024, at 5:05 PM, Ben Kaufman bkaufman@bcmone.com wrote:
>> so it looks like success.
>
> How is it not success? It is not just "not dropping messages". All messages are responded to in only slightly longer than the 1 second delay provided by the web server. How is handling 300 requests per second rather than 2 (the number of children) not an improvement in throughput?
"Looks like success [with the tacit insinuation that it's actually not]" was probably uncharitable. You're right that it is an improvement: all requests are answered rather than dropped.
However, it's not an increase in _throughput_. It's a workaround for Kamailio's concurrency architecture vis-à-vis HTTP: you've just created an elastic buffer for slow HTTP requests. There are, essentially, process A (SIP worker) and process B (async worker), and they both process the request the same way.
Moving the work to process B is beneficial because B is not exposed to incoming SIP packets, while A is. But instead of waiting on HTTP requests in processes of type A, you're now waiting on them in processes of type B; you're still blocking a process and waiting. Crucially, throughput is still bounded by the number of type-B processes and by available memory, and, more to the point, the considerations and limitations around increasing the number of processes, of either type, are the same.
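To make the point concrete, here is a minimal sketch in plain Python (not Kamailio's actual C internals; all names are illustrative) of the A/B model: type-A workers "suspend" requests by enqueueing them into an elastic buffer, and type-B workers resume them and block on the slow operation. Nothing is dropped, but completion rate is still bounded by the size of the B pool.

```python
# Sketch of the A/B worker handoff described above. Assumed/illustrative:
# the worker names, the queue-as-elastic-buffer, and the simulated HTTP delay.
import queue
import threading
import time

SLOW_OP_SECONDS = 0.1   # stand-in for the slow HTTP request
NUM_B_WORKERS = 2       # size of the async (type B) pool

suspended = queue.Queue()   # the "elastic buffer" between A and B
completed = []
done = threading.Event()

def sip_worker_a(requests):
    # Type A: accepts messages and immediately suspends the transaction.
    # It never blocks on the slow operation, so it stays free for new packets.
    for req in requests:
        suspended.put(req)

def async_worker_b():
    # Type B: resumes suspended transactions and blocks on the slow call.
    # Aggregate throughput is bounded by NUM_B_WORKERS / SLOW_OP_SECONDS.
    while not done.is_set() or not suspended.empty():
        try:
            req = suspended.get(timeout=0.05)
        except queue.Empty:
            continue
        time.sleep(SLOW_OP_SECONDS)  # the blocking wait, moved from A to B
        completed.append(req)
        suspended.task_done()

b_pool = [threading.Thread(target=async_worker_b) for _ in range(NUM_B_WORKERS)]
for t in b_pool:
    t.start()

start = time.monotonic()
sip_worker_a(list(range(10)))  # A finishes almost instantly: nothing dropped
suspended.join()               # ...but draining takes ~(10 / 2) * 0.1 s
elapsed = time.monotonic() - start

done.set()
for t in b_pool:
    t.join()

print(len(completed), elapsed)
```

All ten requests complete and the A worker returns immediately, which is exactly why it "looks like success"; yet the elapsed time scales with the queue depth divided by the B-pool size, which is the bounded-throughput point above.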
The picture I painted was:
"asynchronous processing just means liberating the transaction from the main worker processes, which are part of a relatively small fixed-size pool, by suspending it and shipping it to another set of ancillary worker processes"
Your critique of this was, as I understood it:
"this does not simply 'hand off the transaction' to another pool of workers which then accumulate load."
My only aim here is to say that this is, in fact, an accurate characterisation of what is happening: you are handing off the transaction to another pool of workers. I also meant to convey that Kamailio's async model is coarser than the async I/O found in other execution environments.
-- Alex