On 02/16/2015 04:51 PM, Igor Potjevlesch wrote:
Thank you Alex.
I'm not sure I understand the "size" parameter associated with the hashtable.
I have set it to 4, so I understand that I can have 2^4 entries. Does that mean that, if the table key is composed of $ci+$ft, I can store 16 concurrent calls in the table?
Not exactly. The size refers to the number of unique buckets in the hash table, not a limit on the number of entries. Entries do not have to be in unique buckets; several entries can share one (a collision). However, since the aim of a hash table is to provide O(1) lookups, it is desirable to have enough buckets that collisions remain rare. Even a very good hash algorithm can only distribute keys across so many unique buckets if there are very few of them.
This is functionality that you take for granted in high-level interpreted languages, e.g. %hashes in Perl or $associative_arrays[] in PHP. The computer science underpinnings are exposed here at a lower level. If you really want to understand the idea behind the htable module and why it has that name:
http://en.wikipedia.org/wiki/Hash_table
If you're interested only in the applied dimension and not the formal properties, then the guidance is: provide enough bucket space to accommodate at least the maximal number of elements you reasonably expect to store concurrently. The memory allocation per bucket is trivial nowadays, so you really don't need to worry about this. For example, if you expect to store 5000 elements concurrently, just provide 2^13 (8192) buckets.
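As a rough sketch (assuming Kamailio's htable module, that "size" is the base-2 exponent as in your 2^4 reading, and with a made-up table name and autoexpire value), that sizing would look something like:

    # 2^13 = 8192 buckets; the table name "calls" and autoexpire are just examples
    modparam("htable", "htable", "calls=>size=13;autoexpire=7200;")

    # store one item per call, e.g. keyed on the Call-ID
    $sht(calls=>$ci) = $ft;

The table can still hold more than 8192 entries; the extra headroom just keeps bucket chains short so lookups stay fast.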
-- Alex