My comments are inline below, marked with [Kalki70]:
First, the variables must be declared at the beginning
of the functions/blocks, to be coherent with the rest of the code.
[Kalki70] I did not know you had this convention. I am more used to the
"declare the variable when you need it" approach, as it makes the code clearer
(a matter of opinion). I will change that.
Then, the new code is doing new-hashing over an
integer converted to a string, and then does new-hashing over the returned index (again
converted to a string). I do not like the approach. Why not just do hashid modulo the
number of active destinations, then select the destination address by iterating over the
list of active destinations, instead of picking it directly by index?
[Kalki70] I agree that it is not nice. I am new to the code and could not find
a way to do a native hash over an integer; it seemed that everything ends up hashing
over a string. If you can point me to an appropriate available function that hashes an
integer, I will change it. Note that the code is already doing hashid modulo and then
iterating over a list of destinations; that is what I am trying to fix.
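To make the suggestion concrete, here is a minimal sketch of "hashid modulo number of active destinations, then walk the list counting only active entries". The names (dest_t, select_active) are illustrative, not the actual dispatcher structures:

```c
#include <stddef.h>

/* Illustrative destination record; NOT the real dispatcher struct. */
typedef struct dest {
    int active;          /* nonzero if the destination is usable */
    const char *uri;     /* destination address */
} dest_t;

/* Select the i-th ACTIVE destination, where i = hashid % n_active.
 * Returns NULL when no destination is active. */
const dest_t *select_active(const dest_t *list, size_t n, unsigned hashid)
{
    size_t n_active = 0;

    /* first pass: count active destinations */
    for (size_t i = 0; i < n; i++)
        if (list[i].active)
            n_active++;
    if (n_active == 0)
        return NULL;

    /* second pass: walk to the target-th active entry */
    size_t target = hashid % n_active;
    for (size_t i = 0; i < n; i++) {
        if (list[i].active && target-- == 0)
            return &list[i];
    }
    return NULL; /* unreachable when counts are consistent */
}
```

Because the modulo is taken over the count of active destinations, the load of a failed destination is spread across all remaining ones instead of landing on a single neighbor.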
Anyhow, I think the existing algorithms should stay
as they are now; you can add a new algorithm hashing the value in a variable (like
algorithm 7) with the re-hashing you want. Algorithm 7 can substitute all the other
hashing algorithms by setting the appropriate value in the hashing variable (e.g.,
$var(dshash) = $ci for algorithm 0).
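For reference, my understanding is that the variable-hashing setup described above would look roughly like this in kamailio.cfg, using the dispatcher module's hash_pvar parameter (a sketch, not tested; please check the parameter names against the module documentation):

```cfg
# sketch: with algorithm 7, the hash is computed over the string
# value of the pseudo-variable configured via hash_pvar
modparam("dispatcher", "hash_pvar", "$avp(dshash)")

# then, in the routing logic, reproduce algorithm 0 (hash over Call-ID):
$avp(dshash) = $ci;
ds_select_dst("1", "7");
```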
[Kalki70] Here I disagree. I am not proposing a new algorithm; I am trying to FIX the
current implementation of the hashing algorithms, which I consider faulty and unusable in
a real production environment. With the current fallback, every destination must run at
less than 50% capacity to be able to absorb the full load of a failed destination; that
means duplicating resources. Even worse: you may have 50 destinations, but if two
consecutive entries in dispatcher.list fail, the next one receives triple load, its own
plus the full load of the two failed destinations. How do you protect against that
scenario? It is impossible in a real deployment. It would have been better (not so bad,
but not really "better") to use an "M+N" approach: hash over M destinations, and if a
destination is down, use one of the N spares instead of overloading the currently active
ones.
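To make the overload concrete, here is a small self-contained simulation. The pick() function is a simplified model of a "walk to the next active destination" fallback, not the actual dispatcher code; with 50 destinations and uniformly distributed hash ids, two consecutive failures triple the load on the destination that follows them:

```c
#define NDEST 50

static int active[NDEST];

/* Simplified "next active" fallback: start at hashid % NDEST and
 * walk forward until an active destination is found. */
static int pick(unsigned hashid)
{
    int idx = (int)(hashid % NDEST);
    while (!active[idx])
        idx = (idx + 1) % NDEST;
    return idx;
}

/* Tally per-destination load for `total` uniformly distributed hash ids. */
static void simulate(unsigned total, int load[NDEST])
{
    for (int i = 0; i < NDEST; i++)
        load[i] = 0;
    for (unsigned h = 0; h < total; h++)
        load[pick(h)]++;
}
```

With 50,000 hash ids (1,000 per slot) and destinations 10 and 11 marked down, destination 12 ends up with 3,000 calls while every other active destination gets 1,000, exactly the triple-load scenario described above.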
I want to open the debate here: I think the current behavior of the hashing algorithms
when a destination is down is faulty. I asked about this on the Kamailio users list but
received no answer. I also reported strange behavior when entries in dispatcher.list are
duplicated:
https://lists.kamailio.org/pipermail/sr-users/2020-June/109518.html
I also have a question about the reversed order of destinations. As you can see in issue
#2361, when a destination fails, the code selects the "next" one, but that is
really the previous one in the file dispatcher.list. Also, if we use the
"use_default" flag, the last destination should not be used unless all other
destinations fail. Again, if you try it, you will see that the reserved destination is
not the last one but the first.
It seems that somehow the list of destinations from the file is loaded in reverse, so
iterating to the "next" actually moves to the previous one, and the last entry
becomes the first.
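My guess at the cause, shown as a hypothetical sketch (this is an assumption about the loading code, not a quote of the actual dispatcher source): if each parsed line is inserted at the HEAD of a singly linked list, a common parsing pattern, the in-memory order is the reverse of the file order:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical model of loading dispatcher.list into memory. */
typedef struct dnode {
    char uri[64];
    struct dnode *next;
} dnode_t;

/* Insert a parsed destination at the head of the list and return
 * the new head. Head insertion means the LAST line of the file
 * ends up FIRST in the list. */
static dnode_t *load_dest(dnode_t *head, const char *uri)
{
    dnode_t *n = malloc(sizeof *n);
    if (!n)
        return head;
    strncpy(n->uri, uri, sizeof n->uri - 1);
    n->uri[sizeof n->uri - 1] = '\0';
    n->next = head;   /* head insertion reverses file order */
    return n;
}
```

If the dispatcher builds its list this way, it would explain both observations: "next" walks toward earlier file entries, and the first file entry behaves like the last one.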
Best regards,
Luis Rojas (user Kalki70).
https://github.com/kamailio/kamailio/pull/2363#issuecomment-648824201