Hello,
starting here a discussion about an issue with the database API and signed/unsigned number type to see how to address it.
So far, the database API supported only signed types for numbers: DB1_INT (int in C) and DB1_BIGINT (long long in C). However, many database table columns are defined as UNSIGNED INT or UNSIGNED BIGINT. The database connector modules are mapping UNSIGNED values retrieved from the database over signed fields in the DB API structure. The other way around happens as well: signed values are inserted into the unsigned columns.
There is no issue if the value in C is a positive number. However, if the value goes over INT_MAX (2147483647), it becomes negative, and inserting the value into the database results in an exception, with 0 being stored instead.
So far, I guess the issue was rarely exposed, if at all, since there has been no report on it, even though these DB1 data types date back to when the SER project was started in 2001. Looking at the database schema definitions, most unsigned columns are used for internal flags or ids (e.g., lcr_id), where I guess no large values were used or needed so far.
However, it can bite at any time and needs to be addressed. So far, two solutions come to mind:
1) drop using UNSIGNED INT for those db columns, use only INT, and cast to unsigned in the C code when reading and back to int when writing.
2) update the db connector modules to support unsigned types -- I added support for them in the DB API, but each db_* module has to be updated. Also, each module that uses UNSIGNED DB columns must be updated.
1) should be simpler; 2) is more work, but better in the long term.
Any other ideas? Which of the options do you prefer to go for?
Not to forget: somewhat related, we probably have to switch from int to long for PV number values, otherwise timestamp variables can go negative once the unix timestamp approaches INT_MAX (still plenty of time, but it should not be delayed for long ...).
Cheers, Daniel