"...so it looks like success."
How is it not success? It is not just "not dropping messages". All messages
are responded to in only slightly longer than the 1-second delay imposed by the web
server. How is handling 300 requests per second, rather than 2 (the number of children),
not an improvement in throughput?
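Rough arithmetic, under the assumption that each blocking HTTP call would tie up a child
process for the full 1-second delay:

    blocking ceiling ≈ children / per-request service time = 2 / 1 s = 2 requests per second

versus the 300 requests per second offered in the test.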
Kaufman
Senior Voice Engineer
E: bkaufman(a)bcmone.com
SIP.US Client Support: 800.566.9810 | SIPTRUNK Client Support: 800.250.6510 |
Flowroute Client Support: 855.356.9768
________________________________
From: Alex Balashov via sr-users <sr-users(a)lists.kamailio.org>
Sent: Thursday, August 29, 2024 3:45 PM
To: Kamailio (SER) - Users Mailing List <sr-users(a)lists.kamailio.org>
Cc: Alex Balashov <abalashov(a)evaristesys.com>
Subject: [SR-Users] Re: http_async and tm
You might be thinking about it backwards. The throughput is the same in process A as it is
in process B. The difference is that process B isn't in the critical path of new SIP
packets, so it looks like success.
Consider a config in which you take every request, put it on an mqueue, dequeue it inside
an async worker (i.e. while(mq_fetch(...)) { ... }), and then do a blocking HTTP query inside
that loop instead. You would get the same throughput, which is to say that the resumption of
the transaction upon HTTP reply is just an implementation detail, not a saliently
throughput-boosting feature.
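One possible rendering of that hypothetical, assuming the tm/tmx, mqueue, rtimer and
http_client modules; the queue name, URL and route names are made up for illustration:

    loadmodule "tm.so"
    loadmodule "tmx.so"
    loadmodule "mqueue.so"
    loadmodule "rtimer.so"
    loadmodule "http_client.so"

    modparam("mqueue", "mqueue", "name=lookups")
    # one timer process playing the role of the single "async worker"
    modparam("rtimer", "timer", "name=tq;interval=1;mode=1;")
    modparam("rtimer", "exec", "timer=tq;route=MQ_CONSUME")

    request_route {
        # create and suspend the transaction, park its id on the queue;
        # the SIP worker is free again as soon as this route ends
        t_newtran();
        if (t_suspend()) {
            mq_add("lookups", "$T(id_index):$T(id_label)", "$rU");
        }
        exit;
    }

    route[MQ_CONSUME] {
        while (mq_fetch("lookups")) {
            # blocking HTTP query inside the consumer loop
            http_client_query("http://127.0.0.1:8080/lookup?user=$mqv(lookups)", "$var(res)");
            $var(idx) = $(mqk(lookups){s.select,0,:});
            $var(lbl) = $(mqk(lookups){s.select,1,:});
            t_continue("$var(idx)", "$var(lbl)", "RESUME");
        }
    }

    route[RESUME] {
        t_reply("404", "Not Found");
        exit;
    }

In this blocking variant, the consumer services one parked request per HTTP round trip; the
queueing changes where the waiting happens, not how much work gets done per unit time.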
I suppose "not dropping SIP messages" can be viewed as a separate dimension of
work from the call processing, and in that sense, you're freeing up the main SIP
workers to consume more packets. I just have a problem with this take because the packets
are then all bound for the same pipeline, and ultimately, the same async worker(s).
The throughput-enhancing value of async depends on liberating workers to do _other_ things
while $slow_workload unspools. If the $slow_workload _is_ the work, you're just moving
food around on the plate. Whether you march straight from Maogong or stop over at
Songpan, you still end up at Yan'an.
-- Alex
On Aug 29, 2024, at 4:18 PM, Ben Kaufman
<bkaufman(a)bcmone.com> wrote:
I'm not sure I understand how it's not getting more throughput. Every request
got its reply in only a little over a second, from the first request to the 18,000th.
Using a blocking HTTP request, this config (with its low number of workers) died
quickly.
The original post in this thread (i.e. the real-world example) was a 302-redirect server.
My example traded that for a 404 because it was easier to implement than returning
values from the web server to construct a 302 reply, but certainly extracting information from
the HTTP reply and putting it into the SIP reply is trivial.
Kaufman
Senior Voice Engineer
E: bkaufman(a)bcmone.com
SIP.US Client Support: 800.566.9810 | SIPTRUNK Client Support: 800.250.6510 |
Flowroute Client Support: 855.356.9768
From: Alex Balashov via sr-users <sr-users(a)lists.kamailio.org>
Sent: Thursday, August 29, 2024 2:41 PM
To: Kamailio (SER) - Users Mailing List <sr-users(a)lists.kamailio.org>
Cc: Alex Balashov <abalashov(a)evaristesys.com>
Subject: [SR-Users] Re: http_async and tm
Ben,
You're absolutely correct that the transaction is resumed upon receipt of an HTTP
reply, and that the async workers do not block on this event.
This is also true of a variety of other patterns and workflows which shuttle the
transactions off to an async process pool, not really specific to the HTTP client modules.
`tsilo` comes to mind.
My overview was meant to make a more general point: once the transaction is resumed, the
processing is otherwise linear and identical in every other respect to how it would unfold
in the main SIP worker process. This is what I meant when I said:
"This does not cause the processing to enter some generally more
asynchronous mode in any other respect, and in that sense, is quite
different to what most people have in mind when they think of
asynchronous processing in the context of general-purpose
programming runtimes."
My over-arching point is that throughput is still limited by the (fixed) size of the
async worker pool, and that the same considerations about its sizing apply in async land
as in regular child process land.
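Concretely, both pools are fixed in size at startup, and the same back-of-the-envelope bound
applies to each (the numbers here are purely illustrative):

    children=8                                    # main SIP worker processes
    modparam("http_async_client", "workers", 4)   # processes that run the resumed route

    # for either pool, roughly:
    #   max throughput ≈ pool size / average time a request occupies a worker in that pool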
For this reason, I don't think you are correct to conclude from this exercise that
using asynchronous mode / `http_async_client` increases _throughput_. Your example config
has one async worker, and that worker can only handle one resumed transaction at a time.
This is not a net increase in throughput. You say in your documentation: "Re-running
the load test will result in heavily blocked and dropped SIP requests", which is
absolutely true; in this paradigm, you are liberating the SIP workers from the congestion
that would cause "blocked and dropped SIP requests", by doing what I described
in my original message: "suspending it and shipping it to another set of ancillary
worker processes".
The real conflation here is between "not dropping SIP requests" and
"increased throughput". There is no increased throughput, only a deep queue of
suspended transactions lined up at the async worker pool. This goes back to the question I
raised in a previous message: what exactly is meant by "handle"? Are you
"not dropping" 300 CPS, or are you relaying 300 CPS worth of requests? I fully
believe that one async worker can churn through hundreds of resumed transactions per
second, driven by near-simultaneous HTTP replies, when the effect is simply to send a
'404 Not Found', as in your config. Real-world workloads don't particularly
resemble that in most cases.
You're not getting more _throughput_. You're changing where the blocking occurs
so as to not obstruct receipt of further SIP messages, which is precisely what I meant
when I said: "suspending it and shipping it to another set of ancillary worker
processes".
From the perspective of not dropping SIP messages, sweeping these transactions out of the
main worker pool is indeed quite effective, and if it is in that narrow sense of "not
dropping" that you meant "keeping up with the requests", then you are quite
right. I was thinking about bottom-line throughput capacity. I think there is a
misconception out there that other config script operations also acquire an asynchronous
character, beyond the initial "shipping", as it were, and this is the idea I
meant to single out.
-- Alex
On Aug 29, 2024, at 3:13 PM, Ben Kaufman
<bkaufman(a)bcmone.com> wrote:
I believe that there's a conflation of the http_async_client module and the async
module going on here. My understanding (and testing bears this out) is that the
http_async_client module works in the manner you describe as being "mediated by external
events", in that the HTTP reply is the external trigger that resumes the transaction.
It appears that the module abstracts away any actual calls to suspend/resume.
I created a test project that demonstrates this:
https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.co…
Using the container deminy/delayed-http-response to create an HTTP service that sleeps 1
second and then replies, and with Kamailio configured with 2 child listening processes and 1
http_async_client worker, a simple use of the http_async_query() function handles
300 CPS sustained over a minute with no problems. This passes all of the latency load
onto the HTTP server. Yes, if the HTTP server cannot handle the request load as the
requests increase, that will be a problem, but I think the understanding of how this
module works is incorrect.
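In rough outline, the relevant parts of such a config look like the following; the host, port
and route name are illustrative, not copied from the project:

    children=2

    loadmodule "tm.so"
    loadmodule "http_async_client.so"

    modparam("http_async_client", "workers", 1)

    request_route {
        # nothing blocks here: http_async_query() suspends the transaction
        # and the SIP worker goes straight back to reading packets
        t_newtran();
        http_async_query("http://delayed-http:8080/", "HTTP_DONE");
    }

    route[HTTP_DONE] {
        # runs in the single http_async_client worker once the (delayed)
        # HTTP reply arrives; $http_rs / $http_rb hold its status and body
        t_reply("404", "Not Found");
        exit;
    }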
Kaufman
Senior Voice Engineer
E: bkaufman(a)bcmone.com
SIP.US Client Support: 800.566.9810 | SIPTRUNK Client Support: 800.250.6510 |
Flowroute Client Support: 855.356.9768
From: Alex Balashov via sr-users <sr-users(a)lists.kamailio.org>
Sent: Tuesday, August 27, 2024 6:57 AM
To: Kamailio (SER) - Users Mailing List <sr-users(a)lists.kamailio.org>
Cc: Alex Balashov <abalashov(a)evaristesys.com>
Subject: [SR-Users] Re: http_async and tm
On Aug 27, 2024, at 4:17 AM, Henning Westerholt
via sr-users <sr-users(a)lists.kamailio.org> wrote:
The asynchronous HTTP client only helps you if you have other traffic that can be
handled without the need for HTTP API calls, and/or if you have traffic
fluctuations, so that you can basically prevent blocking by buffering requests in memory.
Indeed. It's also worth reiterating that the meaning of "asynchronous" is
somewhat environmentally and implementationally specific.
As the term has entered general use with the popularity of single-threaded / single event
loop multiplexing systems, such as Node and JavaScript, it has come to refer to a
programming and processing pattern in which the waiting and detection of I/O is delegated
to the OS kernel network stack. The OS takes care of this juggling and calls event hooks
or callbacks in your program when there is I/O to consume, or sets some flag or condition
to indicate this so that you can read the I/O from some OS buffer at your convenience. In
this way, your program is able to proceed executing other kinds of things while the OS is
taking care of waiting on I/O. Provided that the workload consists of waiting on I/O and
also other things, this is to the general benefit of "other things", not the
I/O.
In Kamailio, asynchronous processing just means liberating the transaction from the main
worker processes, which are part of a relatively small fixed-size pool, by suspending it
and shipping it to another set of ancillary worker processes, also part of a relatively
small, fixed-size pool. Within those ancillary worker processes, the execution is as
linear, synchronous and blocking as it would be in the main worker processes. This does
not cause the processing to enter some generally more asynchronous mode in any other
respect, and in that sense, is quite different to what most people have in mind when they
think of asynchronous processing in the context of general-purpose programming runtimes.
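To make "suspending it and shipping it" concrete, a minimal sketch using the core async
task workers via the async module; the pool size, URL and route names are illustrative, and
the exact function names should be checked against the module docs:

    async_workers=4          # the fixed-size ancillary pool

    loadmodule "tm.so"
    loadmodule "async.so"
    loadmodule "http_client.so"

    request_route {
        # suspend the transaction and ship it to an async task worker;
        # nothing further runs in the SIP worker for this request
        async_task_route("SLOW_WORK");
        exit;
    }

    route[SLOW_WORK] {
        # inside the async worker, execution is just as linear and
        # blocking as it would be in a SIP worker
        http_client_query("http://127.0.0.1:8080/lookup", "$var(res)");
        t_reply("404", "Not Found");
        exit;
    }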
The only real footnote to this is about situations in which the resumption of the
transaction in the async workers is mediated by external events, e.g. a POST-back into
Kamailio's `xhttp` server. While this does not change the nature of the subsequent
synchronous execution of the route logic, it does mean that neither a core SIP worker nor
an async worker is tied up while some kind of external processing is playing out.
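For instance, a rough sketch of that pattern using t_suspend()/t_continue() from tmx, an
htable to hold the suspended transaction ids, and xhttp for the POST-back; the /resume/<call-id>
path and all names are made up for illustration, and an HTTP-capable listen socket is assumed:

    loadmodule "tm.so"
    loadmodule "tmx.so"
    loadmodule "xhttp.so"
    loadmodule "htable.so"

    modparam("htable", "htable", "susp=>size=8;")

    request_route {
        # suspend the transaction and remember its id under the Call-ID;
        # the external system learns the Call-ID out of band (e.g. in an
        # HTTP request fired off here)
        t_newtran();
        if (t_suspend()) {
            $sht(susp=>$ci::idx) = $T(id_index);
            $sht(susp=>$ci::lbl) = $T(id_label);
        }
        exit;
    }

    event_route[xhttp:request] {
        # external system POSTs back to /resume/<call-id> when it is done;
        # neither a SIP worker nor an async worker was tied up meanwhile
        $var(cid) = $(hu{s.select,2,/});
        $var(idx) = $sht(susp=>$var(cid)::idx);
        $var(lbl) = $sht(susp=>$var(cid)::lbl);
        t_continue("$var(idx)", "$var(lbl)", "RESUME");
        xhttp_reply("200", "OK", "text/plain", "resumed\n");
    }

    route[RESUME] {
        t_reply("404", "Not Found");
        exit;
    }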
-- Alex
--
Alex Balashov
Principal Consultant
Evariste Systems LLC
Web: https://evaristesys.com/
Tel: +1-706-510-6800
__________________________________________________________
Kamailio - Users Mailing List - Non Commercial Discussions
To unsubscribe send an email to sr-users-leave(a)lists.kamailio.org
Important: keep the mailing list in the recipients, do not reply only to the sender!
Edit mailing list options or unsubscribe: