Then I see it would take too much effort to add an asynchronous resolver, but I don't think the lack of memory is the right answer: your proposal of using 500 children is more memory demanding than forking on demand, isn't it? You could just add a configuration variable, say max_fork=500, and with, for instance, children=3 your deployment would require much less memory than just children=500. Nevertheless, for simplicity (in the sense of fewer points of failure and a more stable system) it is better to stay with the "common" resolver.
Samuel.
Andrei Pelinescu-Onciul <andrei@iptel.org> 06/07/05 04:49PM >>>
On Jun 07, 2005 at 14:56, Samuel Osorio Calvo <samuel.osorio@nl.thalesgroup.com> wrote:
If DNS is slow or misconfigured (e.g. a zone is delegated to a nameserver which is down), the thread will be blocked for several seconds. For example, if you use Debian woody and 2 nameservers in /etc/resolv.conf, the timeout is 20 seconds. If you are lucky, the OS allows configuration of the DNS timeouts. Nevertheless, you have to consider that a ser thread may be blocked for up to 20 seconds. This has impacts on your configuration.
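To illustrate that last point: on glibc-based systems the resolver timeouts can usually be lowered, either via "options timeout:N attempts:M" in /etc/resolv.conf or programmatically. A minimal sketch of the programmatic variant (just an illustration; the exact defaults and whether your libc honours these fields depend on the version, and this is not something ser does today):

    /* Sketch only: lower the libc resolver's per-query timeout and retry
     * count so a dead nameserver blocks for a few seconds, not ~20. */
    #include <resolv.h>

    void shorten_dns_timeouts(void)
    {
        res_init();
        _res.retrans = 2;   /* seconds per try (default is usually 5) */
        _res.retry   = 1;   /* tries per nameserver (default is usually 2) */
    }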
I don't know the details, but would it really be difficult to use an asynchronous resolver, as the reSIProcate SIP stack does with ARES? Besides exec_* calls, SER's main performance bottleneck is the DNS resolving step, so adding asynchronous DNS queries would be a great improvement.
Using asynchronous DNS would work only as long as you have memory to save the state of the pending DNS requests. It could easily be attacked in the same way (lots of DNS requests that take a long time to resolve => out of memory => no more messages processed). Besides, using it would mean saving the complete state of the message and of ser's processing of the message at the moment the DNS request was made. For example, if you make a DNS request in module foo, function bar(), you should be able to continue from exactly the same point in exactly the same state when you receive the reply. This would mean something equivalent to saving the whole call trace (the whole stack, for that matter) and a lot of global variables. The amount of complexity involved in converting ser to such a model (where such detailed state is saved that it becomes possible to resume processing at a later time) would be huge. I don't think this would be doable in finite time :-)
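Just to make the "saving the whole state" point concrete, a purely hypothetical sketch of what a suspended lookup would have to carry around (no such continuation machinery exists in ser; struct sip_msg is ser's real parsed-message type, the rest is made up for illustration):

    /* Hypothetical: to resume module foo's bar() after an async DNS reply,
     * everything alive at the call site would have to be captured. */
    #include <netdb.h>              /* struct hostent */

    struct sip_msg;                 /* ser's parsed message */

    struct dns_continuation {
        struct sip_msg *msg;        /* complete parsed-message state */
        void *module_ctx;           /* module foo's private state inside bar() */
        int script_pc;              /* position reached in the config script */
        void (*resume)(struct dns_continuation *self,
                       struct hostent *answer);  /* continue bar() here */
    };
    /* ...plus every global variable touched between suspension and
     * resumption would need saving/restoring -- hence the complexity. */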
As an alternative one could fork threads (which would save all the information involved except the global vars.) or new processes (which would save everything). However, in this case we would have to deal with the forking overhead. This can be attacked too (turning ser into a fork bomb). I think it's much better to start ser with lots of child processes (let's say 500, or the maximum acceptable for your machine configuration).
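For completeness, a minimal, stand-alone sketch of the "new thread per lookup" idea (plain POSIX threads and getaddrinfo, not ser code); the per-lookup thread creation is exactly the overhead and fork-bomb exposure being discussed:

    #include <pthread.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <string.h>
    #include <stdlib.h>

    static void *lookup_thread(void *arg)
    {
        char *name = arg;
        struct addrinfo *res = NULL;
        if (getaddrinfo(name, NULL, NULL, &res) == 0) {  /* may block ~20s */
            /* ... hand the result back to the waiting processing ... */
            freeaddrinfo(res);
        }
        free(name);
        return NULL;
    }

    int start_lookup(const char *name)
    {
        pthread_t t;
        if (pthread_create(&t, NULL, lookup_thread, strdup(name)) != 0)
            return -1;          /* thread creation itself can fail under load */
        pthread_detach(t);      /* fire and forget */
        return 0;
    }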
So, I don't think async DNS would be a solution.
[...]
Andrei
On Jun 08, 2005 at 10:21, Samuel Osorio Calvo <samuel.osorio@nl.thalesgroup.com> wrote:
Then I see it would take too much effort to add an asynchronous resolver, but I don't think the lack of memory is the right answer: your proposal of using 500 children is more memory demanding than forking on demand, isn't it? You could just add a configuration variable, say max_fork=500, and with, for instance, children=3 your deployment would require much less memory than just children=500.
It would require only a little more memory, at least on systems that use copy-on-write for forking (like Linux, for example). In this case all the forked processes start out sharing the same memory pages. Only when a write occurs in some shared page is a copy of it created (=> more physical memory allocated). Besides, on-demand forking introduces forking overhead, which, while very small on some systems (Linux), cannot be ignored (and can be quite high on other OSes). Having on-demand forking and starting with, let's say, 50 children wouldn't hurt, but I don't think it would bring any major advantage compared to starting directly with many more children. I think it would even be a little slower when the forking starts. Introducing this in ser has very, very low priority right now.
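A schematic of why pre-forking is cheap on a copy-on-write system (this is only an illustration, not ser's actual main loop): the children keep sharing the parent's pages until they write to them, so even hundreds of mostly idle children cost little extra physical memory.

    #include <sys/types.h>
    #include <unistd.h>
    #include <stdlib.h>

    static void child_main_loop(int rank)
    {
        /* receive and process messages; a blocking DNS lookup stalls
         * only this one child out of N */
        (void)rank;
        for (;;)
            pause();            /* placeholder for the real work loop */
    }

    void spawn_children(int n)  /* n would come from a children=N setting */
    {
        int i;
        for (i = 0; i < n; i++) {
            pid_t pid = fork();
            if (pid == 0) {     /* child: shares parent pages copy-on-write */
                child_main_loop(i);
                exit(0);
            } else if (pid < 0) {
                break;          /* fork failed: out of processes/memory */
            }
        }
    }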
Nevertheless, for simplicity (in the sense of fewer points of failure and a more stable system) it is better to stay with the "common" resolver.
Actually, if you fork or start a new thread when doing DNS lookups, you would not need async DNS.
I don't think async DNS would be doable with ser. It's very good when you do not have a lot of state to save (e.g. doing an HTTP request, you just retry the request when the DNS answer comes), but not appropriate for something like ser. I might be wrong, so if somebody has an idea how this could work in ser, please speak up. Who knows, maybe despite my doubts we could find some corner cases where we could use it.
Probably a better approach would be to cache all the DNS requests in ser (and even cache unresolvable addresses for a short time) and have a small timeout for ser DNS lookups. Even if a request fails due to the timeout, the answer will still arrive sometime in the future and at least ser's DNS cache will be updated. Future requests (e.g. for retransmissions) might then succeed.
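A very rough sketch of what such a cache entry might look like (ser has no such cache at this point; names and fields here are invented purely for illustration):

    #include <time.h>
    #include <netinet/in.h>

    struct dns_cache_entry {
        char name[256];         /* queried host name */
        struct in_addr addr;    /* resolved address (unused if negative) */
        int negative;           /* 1 = lookup failed, cached for a short time */
        time_t expires;         /* TTL from the record, or just a few seconds
                                   for negative / timed-out entries */
    };
    /* lookup(): return the cached entry if not expired; otherwise query with
     * a short timeout and insert the answer (or a short-lived negative entry)
     * whenever it eventually arrives, so retransmissions can succeed. */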
Andrei