
duplicates ignored when using aggmode #500

Open · astrakid opened this issue Oct 2, 2021 · 8 comments · May be fixed by #501

Comments

astrakid (Contributor) commented Oct 2, 2021

hi,
i am not sure if this is intended, but i couldn't find it in the documentation.
when both duplicates AND aggmode are configured in a meter's channel section, data is written on every reading and the duplicates parameter seems to be ignored. as soon as i disable aggmode, the duplicates parameter takes effect.

below is my config extract; aggmode is currently commented out.

    {
        "enabled": true,                // disabled meters will be ignored (default)
        "skip": false,                  // errors when opening meter may be ignored if enabled
        "protocol": "sml",              // meter protocol, see 'vzlogger -h' for full list
        "host": "10.c.c.ddd:3000",      // uri if meter not locally connected using <device>
        "aggtime": 10,                  // aggregate meter readings and send middleware update after <aggtime> seconds
        "aggfixedinterval": true,
        "channels": [
            {
                //### grid import, main meter
                "api": "volkszaehler",      // middleware api, default volkszaehler
                "uuid": "xxxxxxxxx-xxxx-xxxx-xxxx-db391bf3e7ab",
                "middleware": "http://localhost/api",
                "identifier": "1-0:1.8.0",  /* grid import */
                "duplicates": 600,          // duplicate handling, default 0 (send duplicate values)
//              "aggmode": "avg",
            },

with aggmode in use, the following data was written to the database; the latest datapoint was written without aggmode:

    id        channel_id   timestamp       value
    5553876   2            1633172243524   853471.2
    5553873   2            1633172230000   853471.2
    5553866   2            1633172200000   853471.2
    5553863   2            1633172190000   853471.2
    5553858   2            1633172170000   853471.2
    5553855   2            1633172160000   853471.2
    5553852   2            1633172150000   853471.2
    5553849   2            1633172110000   853471.2
    5553846   2            1633172100000   853471.2
    5553843   2            1633172090000   853471.2
    5553838   2            1633172070000   853471.2
    5553833   2            1633172050000   853471.2
    5553830   2            1633172040000   853471.2
    5553828   2            1633172030000   853471.2
    5553826   2            1633172020000   853471.2
    5553821   2            1633172010000   853471.2
    5553818   2            1633172000000   853471.2
    5553815   2            1633171960000   853471.2
    5553810   2            1633171940000   853471.2
    5553805   2            1633171920000   853471.2
    5553802   2            1633171910000   853471.2
    5553799   2            1633171900000   853471.2
    5553796   2            1633171890000   853471.2
    5553793   2            1633171880000   853471.2
astrakid (Contributor, Author) commented Oct 2, 2021

this seems to occur only with aggmode "avg". when using "max" it seems to work fine.

J-A-U (Collaborator) commented Oct 2, 2021

Use of duplicates is only rational for energy, not for values like voltage, current, temperature or power. For energy the aggmode to use is max; avg is for voltage, current, temperature or power.

Therefore I assume it's intentional.
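
for an energy channel, the combination described here would look like this in the channel section (a sketch based on the config extract above; the values are illustrative, not recommendations from the vzlogger docs):

    {
        "identifier": "1-0:1.8.0",  // total energy, monotonically increasing
        "duplicates": 600,          // skip unchanged readings for up to 600 seconds
        "aggmode": "max",           // max: the aggmode suggested here for energy counters
    },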

J-A-U closed this as completed Oct 2, 2021
astrakid (Contributor, Author) commented Oct 2, 2021

the value i want to deduplicate is the total energy consumption. and it shouldn't matter whether avg or max is used: when the values are duplicates, they shouldn't be written to the database. i don't agree with you.
since it works with "max", i think the values differ slightly at the point of comparison and are only rounded afterwards!

r00t- (Contributor) commented Oct 2, 2021

looking at the code,
"duplicates" is implemented in the "volkszaehler" api (it would have no effect with other APIs, confusingly).
being in the API, it only sees the data after it has gone through aggregation.

also, it does an equality comparison on floats, which is a no-no from a programming point of view:

(r.value() != _lastReadingSent->value())) {

so your theory of the average aggregation breaking deduplication due to rounding is probably right.
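
for illustration, a minimal C++ sketch (made-up names, not vzlogger's actual code) of why the exact comparison fails after averaging, and what an absolute-tolerance check would look like:

    #include <cmath>

    // Illustrative sketch, not vzlogger code: after avg aggregation a constant
    // reading like 853471.2 can come back as e.g. 853471.19999999995. An exact
    // != comparison then sees a "changed" value and deduplication never kicks
    // in. Comparing within an absolute epsilon treats such readings as equal.
    bool isDuplicate(double value, double lastSent, double eps = 1e-9) {
        return std::fabs(value - lastSent) < eps;
    }

a fixed absolute epsilon like this is the naive variant; the comments below discuss why the tolerance should scale with the magnitude of the inputs.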

r00t- (Contributor) commented Oct 2, 2021

@astrakid:
can you test the fix in https://github.com/volkszaehler/vzlogger/pull/501/files ?

astrakid (Contributor, Author) commented Oct 3, 2021

> @astrakid: can you test the fix in https://github.com/volkszaehler/vzlogger/pull/501/files ?

looks good!!!
thanks.

r00t- (Contributor) commented Oct 3, 2021

great!

#501 is not to be merged, because it's a very naive fix.
i think the tolerance for the comparison (0.000...01) needs to be chosen relative to the input values for this to work with numbers in any range, but my short research did not turn up a best-practice algorithm.
(it probably needs a float tolerance whose exponent is derived from the exponent of the inputs.)
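
one possible shape for such a magnitude-relative comparison (a sketch under the assumption that scaling the tolerance by the larger operand is acceptable; this is not the code from #501):

    #include <algorithm>
    #include <cmath>

    // Sketch of a relative-tolerance comparison: the epsilon scales with the
    // magnitude of the inputs, so it behaves the same for a kWh counter around
    // 853471.2 and for values near zero. Two exact zeros still compare equal.
    bool nearlyEqual(double a, double b, double relEps = 1e-9) {
        double scale = std::max(std::fabs(a), std::fabs(b));
        return std::fabs(a - b) <= relEps * scale;
    }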

astrakid (Contributor, Author) commented Oct 3, 2021

for vzlogger i think comparing to within 1/1000 should be enough. but even if more precision were used: if we round the results, we should always get the same value, which then wouldn't be written to the database.
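
the rounding idea from this comment could look like the following sketch (the 1/1000 granularity is the suggestion above, not a vzlogger default):

    #include <cmath>

    // Sketch of round-before-compare: quantize both readings to 1/1000, so
    // averages that differ only far behind the decimal point count as
    // duplicates. std::llround keeps the comparison exact on integers.
    bool sameToThousandth(double a, double b) {
        return std::llround(a * 1000.0) == std::llround(b * 1000.0);
    }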
