
Add UODE Lotka-Volterra notebook #153

Open

markusheinonen wants to merge 5 commits into main from mh-uode

Conversation

@markusheinonen

- Observe both prey and predator (H=I_2) at dt=0.2 (150 points)
- Laplace(0, 0.005) prior for SINDy-like sparsity on Theta
- SVI config: lr=1e-3, cov_rescaling=2.0, lr_decay=0.5, 2000 steps
- Correctly recovers the xy interaction coefficients (Frobenius err ~0.077)
- Includes a debug CLI script for systematic experiments
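For intuition on the Laplace prior: at the MAP point, a Laplace(0, b) prior on Theta is equivalent to L1 regularization with weight proportional to 1/b, which is exactly the mechanism behind the SINDy-like sparsification. A minimal NumPy sketch of that mechanism (illustrative only; the feature library, data, and penalty weight below are made up and are not the notebook's code):

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: shrinks toward zero,
    # sending entries with |z| <= t to exactly zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # toy feature library, e.g. [x, y, xy, x^2, y^2]
true_theta = np.array([0.0, 0.0, -0.8, 0.0, 0.0])   # only the "xy" column is active
y = X @ true_theta + 0.01 * rng.normal(size=200)

lam = 1.0                          # L1 weight; a Laplace(0, b) prior gives lam proportional to 1/b
L = np.linalg.norm(X.T @ X, 2)     # Lipschitz constant of the least-squares gradient
theta = np.zeros(5)
for _ in range(2000):              # ISTA: gradient step, then soft threshold
    grad = X.T @ (X @ theta - y)
    theta = soft_threshold(theta - grad / L, lam / L)

print(theta)  # inactive entries end up exactly zero; the xy coefficient lands near -0.8
```

The soft-threshold step is what produces exact zeros, which is why a Laplace (rather than Gaussian) prior gives SINDy-like sparsity on Theta.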
@mattlevine22
Collaborator

mattlevine22 commented Mar 19, 2026

Was able to get this notebook to run "perfectly" in #155 .

To get things to work better:

  1. Use ContinuousTimeEnKF instead of EKF, as the EnKF is more numerically robust.
  2. Revert to plain Adam with lr=1e-3 and 4000 epochs. At 1000 epochs, things are pretty close, and by 4000 epochs the answer has fully sparsified.
  3. There are some bugs when trying to do MAP-predictive filtering here: observe that `svi_result.params['Theta_auto_loc'] != Theta_inferred`.

Instead of `predictive_filter = Predictive(data_conditioned_model, params=svi_result.params, num_samples=1)`, do `predictive_filter = Predictive(data_conditioned_model, params=guide.median(svi_result.params), num_samples=1)`.
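On point 1, for readers wondering why the switch helps: the EKF linearizes the dynamics at each step, which can blow up for strongly nonlinear vector fields like Lotka-Volterra, whereas the EnKF replaces the linearization with the sample covariance of a propagated ensemble. A generic stochastic-EnKF analysis step in NumPy (a sketch of the idea only; none of these names are the dynestyx API):

```python
import numpy as np

def enkf_update(ensemble, H, R, y_obs, rng):
    """One stochastic EnKF analysis step (illustrative, not dynestyx's).

    ensemble: (N, d) forecast ensemble, H: (m, d) observation matrix,
    R: (m, m) observation noise covariance, y_obs: (m,) observation.
    """
    N = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)          # centered ensemble anomalies
    P = A.T @ A / (N - 1)                         # sample state covariance (no Jacobians needed)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain built from samples
    # Perturb the observation per member (the "stochastic" in stochastic EnKF)
    y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R, size=N)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

rng = np.random.default_rng(1)
H = np.eye(2)                                        # observe both prey and predator, as in the notebook
R = 0.05**2 * np.eye(2)
prior = rng.normal([1.0, 1.0], 0.3, size=(100, 2))   # toy forecast ensemble
posterior = enkf_update(prior, H, R, np.array([1.2, 0.8]), rng)
# The analysis ensemble mean moves toward the observation.
```

Because the update only ever evaluates the (possibly nonlinear) model on ensemble members, there is no Jacobian to go unstable, which is consistent with the robustness observed here.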

mattlevine22 and others added 3 commits March 21, 2026 21:50
* add MLL-truth computation and more visualizations

* working LV UODE: use EnKF + 2000 Adam epochs with lr=1e-3 + fixed posterior predictive filtering plots

* reverted to original timeseries length
@DanWaxman
Collaborator

I also just updated this in lieu of #135

@mattlevine22
Collaborator

Now that it is working (mostly by using EnKF and running ADAM for longer), we should think about where to put this example.

  1. Should this go in Docs as a deep dive? Or should it be in the dynestyx-examples repo?
     - I like the idea of us having a reimplementation of "Universal ODEs" in our repo (to show that most of the popular methods can be set up easily in Dynestyx).
  2. Do we want to keep the debugging script / put it in dynestyx-examples?

@DanWaxman
Collaborator

1. Should this go in Docs as a deep dive? Or should it be in the `dynestyx-examples` repo?

Hmm I think this is a good open question. I think there are two choices for how we use dynestyx-examples: (1) have every example there; (2) keep minimal examples (e.g., that implement mainstream ideas in a single notebook) in dynestyx, and delegate more complex multi-file case studies to dynestyx-examples. I lean more towards (2), in which case we may still consider this a "deep dive."

In any case, I think it's a low-cost decision; it's easy to revert later.

2. Do we want to keep the debugging script / put it in dynestyx-examples?

I would get rid of the debugging script if we keep it in dynestyx, and keep it if we put it in dynestyx-examples. I don't want duplicated code across repos, though.
