Investigate negative fluxes #26
Conversation
Compare: 5f71d43 to e6fb710
Even if the pipeline keeps going with the calculations, the results in this case are wrong, right? Do you want to allow that? Or simply let the pipeline exit loudly by raising an error? With the second option, I would expect these problems to go less unnoticed than if you let the pipeline run "normally".
These errors or invalid values are probably caused by bad data (we have to make sure, though). If the pipeline continues and the data become NaNs, they will be automatically flagged. We can still see that there were errors by looking at the error messages. However, if the pipeline just stops on the first error, every time there is a single bad point we will lose whatever data was going to be reduced afterwards (worst case, the whole night). Maybe we could implement an option stop_on_errors that drops into the interpreter if activated, or just re-raises exceptions instead of continuing with a simple message. But in general it should be resilient to errors so no good data is lost. A rough sketch of what this could look like is below.
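For concreteness, here is a minimal sketch of that behaviour in Python. The names reduce_frame and process_night are hypothetical placeholders, not the pipeline's actual API; the point is only the control flow: flag-and-continue by default, re-raise when stop_on_errors is set.

```python
import logging

import numpy as np

logger = logging.getLogger(__name__)


def reduce_frame(frame):
    """Placeholder for the real reduction step (hypothetical)."""
    if not np.isfinite(frame).all():
        raise ValueError("bad data in frame")
    return frame.mean()


def process_night(frames, stop_on_errors=False):
    """Reduce every frame without losing the rest of the night.

    stop_on_errors=False (default): log the error, flag the result
    as NaN, and keep going. stop_on_errors=True: re-raise so the
    pipeline exits loudly on the first bad point.
    """
    results = []
    for i, frame in enumerate(frames):
        try:
            results.append(reduce_frame(frame))
        except Exception:
            if stop_on_errors:
                raise  # exit loudly on the first error
            logger.exception("Frame %d failed; flagged as NaN, continuing", i)
            results.append(np.nan)  # NaN result is automatically flagged downstream
    return results
```

With this shape, a single bad frame costs one NaN in the results rather than the rest of the night, and the logged traceback still records what went wrong.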
Let's record here whatever instances of this kind of error we find and debug, so we can make an informed decision about how to treat them. I will add them to the issue description as they come.
Fair enough. Go ahead with the merge.
Better in issue #24 than in this PR, which will be closed when merging.
In this PR:
Let's keep issue #24 open until we better understand where exactly it comes from.