Arxiv/Blog/Paper Link
https://arxiv.org/pdf/2305.00048.pdf

Detailed Description
Dataset is available through Huggingface here: https://huggingface.co/datasets/excarta/madis2020
This paper focuses on verifying how well data-driven weather models perform against real observations, in addition to the common benchmark of ERA5 reanalysis. The observations in the dataset come from MADIS.
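For reference, a minimal sketch of pulling the dataset down with the Huggingface `datasets` library. The split name and column layout are assumptions on my part, so inspect the printed features before relying on them:

```python
# Minimal sketch: load the MADIS 2020 observation dataset from Huggingface.
# Assumes the repo is loadable with the standard `datasets` API and exposes
# a "train" split; the actual split names and column schema may differ.
from datasets import load_dataset

ds = load_dataset("excarta/madis2020", split="train")
print(ds)     # inspect the available columns/features
print(ds[0])  # peek at one observation record
```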
Context
It would be good to compare what these models can do against observations.
I see a very strange result in this paper: the errors do not increase with lead time.
A good idea would be to use the NOAA-ISD dataset for this verification: it is global and includes quality-control checks. https://www.ncei.noaa.gov/data/global-hourly/
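As a sketch of what that could look like, here is one way to pull a single station-year from the linked archive. The station identifier below is an illustrative assumption (check the ISD station history file for the IDs you need), and the TMP decoding follows the ISD CSV convention of tenths of degrees C plus a quality code:

```python
# Minimal sketch: fetch one station-year of NOAA-ISD global-hourly data.
# The /access/{year}/{station}.csv layout matches the linked archive; the
# station ID below is an illustrative guess, not a vetted choice.
import io

import pandas as pd
import requests

YEAR = 2020
STATION = "72509014739"  # USAF+WBAN identifier; verify against the ISD station list

url = f"https://www.ncei.noaa.gov/data/global-hourly/access/{YEAR}/{STATION}.csv"
resp = requests.get(url, timeout=60)
resp.raise_for_status()

df = pd.read_csv(io.StringIO(resp.text), low_memory=False)

# TMP is encoded as "<temperature in tenths of deg C>,<QC code>".
tmp = df["TMP"].str.split(",", expand=True)
df["temp_c"] = pd.to_numeric(tmp[0], errors="coerce") / 10.0
df["temp_qc"] = tmp[1]
df = df[df["temp_c"].abs() < 900]  # drop the +9999 missing-value sentinel

print(df[["DATE", "temp_c", "temp_qc"]].head())
```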
If this issue is still open, I would like to take it on. I have perused the paper, and what I understand is that the existing evaluation of DDWPs tests how well a model can replicate the data it has ingested. However, the real world has unforeseen circumstances, so what is needed here is to evaluate DDWPs against real-world observations.
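To make the proposed evaluation concrete, below is a minimal sketch of scoring matched forecast/observation pairs by lead time. The column names and the synthetic demo data are assumptions for illustration, not anything from the paper; a healthy model's error curve should grow with lead time, which is exactly the sanity check raised in the comment above:

```python
# Minimal sketch: RMSE of forecasts vs. station observations as a function
# of lead time. Assumes matched pairs with placeholder columns:
# lead_hours, forecast, observed.
import numpy as np
import pandas as pd


def rmse_by_lead_time(pairs: pd.DataFrame) -> pd.Series:
    """RMSE of forecast vs. observation, grouped by lead time.

    `pairs` holds one row per (station, valid time, lead time) match.
    """
    err2 = (pairs["forecast"] - pairs["observed"]) ** 2
    return np.sqrt(err2.groupby(pairs["lead_hours"]).mean())


# Toy usage with synthetic data: error std grows with lead time here, so the
# RMSE curve should rise; a flat curve (as reported above) would be suspicious.
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "lead_hours": np.repeat([6, 24, 72], 200),
    "observed": rng.normal(15.0, 5.0, 600),
})
demo["forecast"] = demo["observed"] + rng.normal(
    0.0, np.repeat([0.5, 1.5, 3.0], 200)
)
print(rmse_by_lead_time(demo))
```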