
Better test coverage #47

Open
nossleinad opened this issue Jan 22, 2025 · 1 comment

Comments

@nossleinad
Collaborator

#45 suggests our coverage is poor. In this case there probably isn't a specific @test we want to perform. Rather, that constructor should simply be called somewhere in the tests, so that a broken constructor makes our unit tests error rather than fail.
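A minimal sketch of what such a smoke test could look like. The `DemoModel` type and its argument are hypothetical placeholders, not the actual constructor from #45:

```julia
using Test

# Hypothetical model type standing in for the constructor discussed in #45.
struct DemoModel
    rate::Float64
    function DemoModel(rate::Float64)
        rate > 0 || throw(ArgumentError("rate must be positive"))
        new(rate)
    end
end

@testset "constructor smoke tests" begin
    # If the constructor is broken, this line errors (rather than fails),
    # which is enough to surface the problem in CI.
    @test DemoModel(1.0) isa DemoModel
    # Or assert explicitly that construction does not throw:
    @test_nowarn DemoModel(0.5)
end
```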

@murrellb
Member

This also suggests our coverage is poor:

[attached screenshot]

Given that we expect a proliferation of models, if there are standard default constructors that set everything up with sensible defaults, we can use a strategy similar to Optimisers.jl (https://github.com/FluxML/Optimisers.jl/blob/master/test/rules.jl): have the tests construct each model with its defaults and simulate over a tree with it. We could also check whether inference is supported (maybe backward! can return false for sim-only models, or all sim-only models could share a parent type?) and, if so, do an LL calc from the data the model simmed.

It won't catch everything but it'll be a good way to force things to be more standardized and catch major issues.
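The loop described above could be sketched roughly as follows. Everything here is a mock stand-in so the snippet is self-contained: the model types, `supports_inference`, `simulate`, and `log_likelihood` are assumptions, not the package's actual API (in particular, `supports_inference` is a placeholder for the "backward! returns false / shared parent type" idea):

```julia
using Test

# Mock model hierarchy; the real package's types are assumptions here.
abstract type AbstractModel end

struct SimAndInferModel <: AbstractModel
    rate::Float64
end
SimAndInferModel() = SimAndInferModel(1.0)  # sensible default

struct SimOnlyModel <: AbstractModel end    # simulation-only, no inference

# Placeholder for the "sim-only models opt out of inference" mechanism.
supports_inference(::AbstractModel) = true
supports_inference(::SimOnlyModel) = false

# Placeholders for tree simulation and the log-likelihood calculation.
simulate(m::AbstractModel, n::Int) = rand(n)
log_likelihood(m::SimAndInferModel, data) = -m.rate * sum(abs2, data)

const MODELS = [SimAndInferModel, SimOnlyModel]

@testset "default-construct, simulate, maybe infer" begin
    for M in MODELS
        model = M()                    # every model needs a default constructor
        data = simulate(model, 10)     # simulate over a (mock) tree
        @test length(data) == 10
        if supports_inference(model)   # skip the LL check for sim-only models
            @test isfinite(log_likelihood(model, data))
        end
    end
end
```

The payoff of this pattern is that adding a model to `MODELS` is all that's needed to get it basic coverage, which is what forces the standardization.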
