diff --git a/exercises/solved_notebooks/P6_calib/calib_fermenter_monod_sol.jl b/exercises/solved_notebooks/P6_calib/calib_fermenter_monod_sol.jl index 19bb6ba..d4b4664 100644 --- a/exercises/solved_notebooks/P6_calib/calib_fermenter_monod_sol.jl +++ b/exercises/solved_notebooks/P6_calib/calib_fermenter_monod_sol.jl @@ -105,7 +105,7 @@ end md""" We have previously used the following parameter values: -- `μmax = 0.40`$\mathrm{h^{-1}}$, `Ks = 0.015`$\mathrm{g/L}$, `Sin = 0.22`$\mathrm{g/L}$ +- `μmax = 0.40`$\mathrm{h^{-1}}$, `Ks = 0.015`$\mathrm{g/L}$, `Sin = 0.022`$\mathrm{g/L}$ - `Y = 0.67`, `Q = 2.0`$\mathrm{L/h}$, `V = 40.0`$\mathrm{L}$ Furthermore, suppose that at $t = 0\;h$ no substrate $S$ is present in the reactor but that there is initially some biomass with a concentration of `0.0005`$\mathrm{g/L}$. diff --git a/exercises/solved_notebooks/P6_calib/calib_irrigation_sol.jl b/exercises/solved_notebooks/P6_calib/calib_irrigation_sol.jl index 0eab9d9..5ebeca4 100644 --- a/exercises/solved_notebooks/P6_calib/calib_irrigation_sol.jl +++ b/exercises/solved_notebooks/P6_calib/calib_irrigation_sol.jl @@ -131,7 +131,7 @@ tspan = (0.0, 150.0) # ╔═╡ 923d04ce-b4d2-44b0-afff-7062c4628ad0 md""" -Declare the Turing model. Make sure you take both experiments into account for optimizing $k$ and $S_{max}$. +Declare the Turing model. Make sure you take both experiments into account for optimizing $k$ and $S_{max}$. Based on literature, you can assume that the value of $S_{max}$ lies somewhere between 100 and 200 mm. 
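A sketch of how such literature bounds could be encoded as a prior (illustrative only — `Smax_prior` and the Normal location/scale are hypothetical choices, not the required solution):

```julia
using Distributions

# Hypothetical prior for S_max: literature gives 100-200 mm, so we center a
# Normal distribution in that range and truncate it to the interval.
Smax_prior = truncated(Normal(150.0, 25.0), 100.0, 200.0)
```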
""" # ╔═╡ 481eb8b9-5de2-4f68-b06a-ec18e054c9f5 diff --git a/exercises/student_notebooks/P6_calib/calib_fermenter_monod.jl b/exercises/student_notebooks/P6_calib/calib_fermenter_monod.jl index f678b51..e251900 100644 --- a/exercises/student_notebooks/P6_calib/calib_fermenter_monod.jl +++ b/exercises/student_notebooks/P6_calib/calib_fermenter_monod.jl @@ -1,21 +1,14 @@ ### A Pluto.jl notebook ### -# v0.20.4 +# v0.20.21 using Markdown using InteractiveUtils # ╔═╡ c54dae10-60af-4141-b56d-ed61cb0ced8a -begin - # add this cell if you want the notebook to use the environment from where the Pluto server is launched - using Pkg - Pkg.activate("..") -end +using Pkg; Pkg.activate("..") # ╔═╡ 245ca9d0-10f9-11ef-0ef6-a73594e96db9 -using Markdown - -# ╔═╡ 78a25bef-31e5-45ef-b0ba-b9a8c8a9edeb -using InteractiveUtils +using Markdown, InteractiveUtils # ╔═╡ 16438e07-1b2b-467e-822a-081d19cae92b using Catalyst, OrdinaryDiffEq @@ -33,16 +26,12 @@ md""" # ╔═╡ 595ea8ee-bc67-4696-9232-982612fb554d md""" -In one of the previous practicals we were introduced to a fermenter in which biomass $X$ [$g/L$] grows by breaking down substrate $S$ [$g/L$]. The reactor is fed with an inlet flow rate $Q_{in}$ [$L/h$], which consists of a (manipulable) inlet concentration of substrate $S_{in}$ [$g/L$]. This process was modelled using Monod kinetics, resulting in the model below: +In one of the previous practica we were introduced to a fermenter in which biomass $X$ [$\mathrm{g/L}$] grows by breaking down substrate $S$ [$\mathrm{g/L}$]. The reactor is fed with an inlet flow rate $Q_{in}$ [$\mathrm{L/h}$], which consists of a (manipulable) input concentration of substrate $S_{in}$ [$\mathrm{g/L}$]. 
This process was modelled using Monod kinetics: $$\begin{eqnarray*} -S + X \xrightarrow[\quad\quad]{k} (1 + Y) \, X \quad\quad\quad\quad \textrm{with} \quad k = \cfrac{\mu_{max}}{S + K_s} +S + X \xrightarrow[\quad\quad]{k} (1 + Y) \, X \quad\quad\quad\quad \textrm{with} \quad k = \cfrac{\mu_{max}}{S + K_s} \, . \end{eqnarray*}$$ """ -# $$\begin{eqnarray*} -# %S \xrightarrow[\quad\quad]{\beta} Y \, X -# S \xrightarrow[\quad\quad]{r} Y \, X \quad\quad\quad\quad r = \mu \, X \quad \textrm{with} \quad \mu = \mu_{max} \, \cfrac{S}{S + K_s} -# \end{eqnarray*}$$ # ╔═╡ 824db995-7a66-4719-a534-7e0f6dec90b5 @@ -51,20 +40,16 @@ The *reaction network object* for this model could be set-up as: """ # ╔═╡ 245c2636-95da-4c76-8b03-c4d20bbabb48 -fermenter_monod = @reaction_network begin - μmax/(S+Ks), S + X --> (1 + Y)*X - # Alternatives: - # X * mm(S, μmax, Ks), S => Y*X - # mm(S, μmax, Ks)*X, S + X => (1 + Y)*X - Q/V, (S, X) --> 0 - Q/V*Sin, 0 --> S -end +# fermenter_monod = @reaction_network begin +# @species missing +# @parameters missing +# missing +# missing +# missing +# end # ╔═╡ 956790e2-6cac-46c9-886e-24d5aceae1c5 -convert(ODESystem, fermenter_monod, combinatoric_ratelaws=false) - -# ╔═╡ 68f9ecb3-15b0-4a53-8864-5dac13a89e95 -parameters(fermenter_monod) +# convert(missing, missing) # ╔═╡ de8ddc14-8f82-403d-8f42-29673ef2a722 md""" @@ -98,7 +83,6 @@ Make a scatter plot of the measured data for both $S$ and $X$. Use the following """ # ╔═╡ 918fd524-81fa-4aff-a403-37402e47235b -# Uncomment and complete the instruction # begin # missing # missing @@ -108,38 +92,22 @@ Make a scatter plot of the measured data for both $S$ and $X$. 
Use the following md""" We have previously used the following parameter values: -- $\mu_{max} = 0.40\;h^{-1}$, $K_s = 0.015\;g/L$, $S_{in} = 0.022\;g/L$ -- $Y = 0.67$, $Q = 2.0\;L/h$, $V = 40.0\;L$ +- `μmax = 0.40`$\mathrm{h^{-1}}$, `Ks = 0.015`$\mathrm{g/L}$, `Sin = 0.022`$\mathrm{g/L}$ +- `Y = 0.67`, `Q = 2.0`$\mathrm{L/h}$, `V = 40.0`$\mathrm{L}$ -Furthermore, suppose that at $t = 0\;h$ no substrate $S$ is present in the reactor but that there is initially some biomass with a concentration of $0.0005\;g/L$. - -Calibrate the parameter values for $\mu_{max}$ and $K_s$ using the aforementioned measurement data for $S$ and $X$ in a timespan of $[0, 100]\,h$. Take the values above as initial values for $\mu_{max}$ and $K_s$. -""" +Furthermore, suppose that at $t = 0\;h$ no substrate $S$ is present in the reactor but that there is initially some biomass with a concentration of `0.0005`$\mathrm{g/L}$. -# ╔═╡ 7a227eaf-18d0-44f4-ac4b-f529e81c7471 -md""" -Create an `ODEProblem`. Use the aforementioned values as initial values for the problem. +Calibrate the parameter values for $\mu_{max}$ and $K_s$ using the aforementioned measurement data for $S$ and $X$ in a timespan of `[0, 100]`$\mathrm{h}$. Take the values above as initial values for $\mu_{max}$ and $K_s$. """ -# ╔═╡ 6375478f-1af9-4fd2-b6f3-101a6f796f2d -# u0 = missing # Uncomment and complete the instruction - -# ╔═╡ 38fe8304-af61-40a7-ac86-480dfb892185 -# tspan = missing # Uncomment and complete the instruction - -# ╔═╡ 87482f88-8413-4820-9613-7941f3d61bd7 -# params = missing # Uncomment and complete the instruction - -# ╔═╡ 94f3bd7b-5c2c-4661-a0ab-2cdaf2cd6743 -# oprob = missing # Uncomment and complete the instruction - # ╔═╡ f6a8f134-6db0-4d74-8af5-82826347d8f0 md""" -Declare the Turing model. Use `InverseGamma` for the standard deviations of the measurements and `LogNormal` for `μmax` and `K`. +Declare the Turing model. 
Assume the following for the priors: +- The measurement error is an unknown positive value, but probably near $0$. +- The parameters `μmax` and `K` are both positive and expected to be in $[0.0, 1.0]$, presumably around $0.1$. """ # ╔═╡ 4c28a66a-ee2c-42a2-95c7-ea4ddb6a232d -# Uncomment and complete the instruction # @model function fermenter_fun(t_meas) # σ_S ~ missing # σ_X ~ missing @@ -152,29 +120,35 @@ Declare the Turing model. Use `InverseGamma` for the standard deviations of the # X_s ~ missing # end +# ╔═╡ 5928a9f3-f33c-4689-a6e5-637447f420d6 +md""" +!!! tip + This model can change between being non-stiff and stiff based on the sampled parameter values. You can use an auto-switching solver such as `AutoTsit5(Rosenbrock23())` here to make calibration more stable. +""" + # ╔═╡ 3136b15d-5078-4bcd-954b-e89bcb8aed1b md""" -Provide the time measurements to the defined function and instantly condition the model with the measurements of $S$ and $X$: +Provide the measurements to the Turing model. """ # ╔═╡ 6a508a62-61b9-4273-8e45-b26f594e8da9 -# fermenter_cond_mod = missing # Uncomment and complete the instruction +# fermenter_inf = missing # ╔═╡ 63420055-55f8-4def-8b0e-11ea61483010 md""" -Optimize the priors ($\sigma_S$, $\sigma_X$, $\mu_{max}$ and $K_s$). Do this with `MLE` method and Nelder-Mead. Store the optimization results in `results_mle`. If necessary, run the optimization again if you get any errors. +Optimize the likelihood of the parameters ($\sigma_S$, $\sigma_X$, $\mu_{max}$ and $K_s$) using the NelderMead optimizer. Store the optimization results in `results_mle`. """ # ╔═╡ d52c9da8-d8a4-4db0-ac6d-6d16ccf4775c -# results_map = missing # Uncomment and complete the instruction +# results_mle = missing # ╔═╡ e1b0ee01-f16c-40e9-a0f9-80072d690936 md""" -Visualize a summary of the optimized parameters. +Visualize a summary of the optimized parameters. Beware that this may take a lot of time... 
""" # ╔═╡ f2d7daf8-8218-446d-b1d2-e9e05aeadfd9 -# missing # Uncomment and complete the instruction +# missing # ╔═╡ 23d58bb1-d077-402e-8bee-3866c68e069a md""" @@ -182,10 +156,10 @@ Get the optimized values and assign them to `μmax_opt` and `Ks_opt`. """ # ╔═╡ 7b3a3677-b251-43c1-b125-6d6ff1a11ea3 -# μmax_opt = missing # Uncomment and complete the instruction +# μmax_opt = missing # ╔═╡ fa77bcbe-2ddc-4113-8f6a-4a18d219da9e -# Ks_opt = missing # Uncomment and complete the instruction +# Ks_opt = missing # ╔═╡ 05d13a48-adc8-4e24-a6e4-be24af2c7a59 md""" @@ -198,7 +172,7 @@ Set up parameter values with optimized parameter values: """ # ╔═╡ 75cf59ed-af8e-4e8a-8ed2-1f3bf4d386d0 -# params_opt = missing # Uncomment and complete the instruction +# params_opt = missing # ╔═╡ 4e8870dc-2da6-4b80-82d6-26c7ceedad7d md""" @@ -206,10 +180,10 @@ Create an ODEProblem and solve it. Use `Tsit5()` and `saveat=0.5`. """ # ╔═╡ 853c1a92-d50f-4b05-9ed3-d3ee1656665a -# oprob_opt = missing # Uncomment and complete the instruction +# oprob_opt = missing # ╔═╡ f45e8124-e942-438e-99c5-3032ccc01454 -# osol_opt = missing # Uncomment and complete the instruction +# osol_opt = missing # ╔═╡ 5a39b0e0-1ea1-4854-8e68-66d0d4bbf25c md""" @@ -217,13 +191,10 @@ Plot $S$ and $X$ simulated with the optimal and initial parameter values togethe """ # ╔═╡ d0156099-ad03-4711-ac0f-94882fb78266 -# Uncomment and complete the instruction # begin # missing # missing # missing -# missing -# missing # end # ╔═╡ 257ae1a9-6264-4122-80be-8022b4b7500c @@ -245,7 +216,6 @@ md""" # ╔═╡ Cell order: # ╠═245ca9d0-10f9-11ef-0ef6-a73594e96db9 -# ╠═78a25bef-31e5-45ef-b0ba-b9a8c8a9edeb # ╠═c54dae10-60af-4141-b56d-ed61cb0ced8a # ╠═16438e07-1b2b-467e-822a-081d19cae92b # ╠═295caa68-db27-4c9b-bc34-86ab088fec24 @@ -255,7 +225,6 @@ md""" # ╟─824db995-7a66-4719-a534-7e0f6dec90b5 # ╠═245c2636-95da-4c76-8b03-c4d20bbabb48 # ╠═956790e2-6cac-46c9-886e-24d5aceae1c5 -# ╠═68f9ecb3-15b0-4a53-8864-5dac13a89e95 # ╟─de8ddc14-8f82-403d-8f42-29673ef2a722 
# ╟─b7b7d58f-d406-4596-b834-ced6d8fada83 # ╠═99c6f31a-0968-4804-9980-71fcc1af1f49 @@ -264,13 +233,9 @@ md""" # ╟─6c481447-28c6-4530-bf2c-64762121bc71 # ╠═918fd524-81fa-4aff-a403-37402e47235b # ╟─ef977370-06ee-4a73-85e2-609a744167d3 -# ╟─7a227eaf-18d0-44f4-ac4b-f529e81c7471 -# ╠═6375478f-1af9-4fd2-b6f3-101a6f796f2d -# ╠═38fe8304-af61-40a7-ac86-480dfb892185 -# ╠═87482f88-8413-4820-9613-7941f3d61bd7 -# ╠═94f3bd7b-5c2c-4661-a0ab-2cdaf2cd6743 # ╟─f6a8f134-6db0-4d74-8af5-82826347d8f0 # ╠═4c28a66a-ee2c-42a2-95c7-ea4ddb6a232d +# ╟─5928a9f3-f33c-4689-a6e5-637447f420d6 # ╟─3136b15d-5078-4bcd-954b-e89bcb8aed1b # ╠═6a508a62-61b9-4273-8e45-b26f594e8da9 # ╟─63420055-55f8-4def-8b0e-11ea61483010 diff --git a/exercises/student_notebooks/P6_calib/calib_intro.jl b/exercises/student_notebooks/P6_calib/calib_intro.jl index 72f1c65..bdefe5f 100644 --- a/exercises/student_notebooks/P6_calib/calib_intro.jl +++ b/exercises/student_notebooks/P6_calib/calib_intro.jl @@ -4,21 +4,29 @@ using Markdown using InteractiveUtils -# ╔═╡ f8a92690-990b-4341-89e1-322adbcb8d1b -begin - # add this cell if you want the notebook to use the environment from where the Pluto server is launched - using Pkg - Pkg.activate("..") +# This Pluto notebook uses @bind for interactivity. When running this notebook outside of Pluto, the following 'mock version' of @bind gives bound variables a default value (instead of an error). +macro bind(def, element) + #! format: off + return quote + local iv = try Base.loaded_modules[Base.PkgId(Base.UUID("6e696c72-6542-2067-7265-42206c756150"), "AbstractPlutoDingetjes")].Bonds.initial_value catch; b -> missing; end + local el = $(esc(element)) + global $(esc(def)) = Core.applicable(Base.get, el) ? Base.get(el) : iv(el) + el + end + #! 
format: on end -# ╔═╡ a09f814a-0c6a-11ef-0e79-a50b01287d63 -using Markdown +# ╔═╡ f8a92690-990b-4341-89e1-322adbcb8d1b +using Pkg; Pkg.activate("..") -# ╔═╡ 7f521435-63ac-4178-a4aa-93d9c45fe820 -using InteractiveUtils +# ╔═╡ a09f814a-0c6a-11ef-0e79-a50b01287d63 +using Markdown, InteractiveUtils # ╔═╡ 015050b3-3339-4b1a-ad7d-c358cce73675 -using Catalyst, OrdinaryDiffEq +using Catalyst, ModelingToolkit, OrdinaryDiffEq + +# ╔═╡ a83f424f-1bbb-4e71-af81-302c0ce68907 +using ModelingToolkit: t_nounits as t, D_nounits as D # ╔═╡ dbfe4800-0974-4ca1-bb0a-d8803409a98b using Turing, StatsPlots, StatsBase @@ -46,18 +54,22 @@ In the models discussed in the previous sessions, we always knew the values of a # ╔═╡ 987f0a4d-e416-4ceb-adbe-3dcdca9d0996 md""" -The search of optimal parameter values usually involves a function, such as a loss function, a (log) likelihood function or a posterior distribution function. In this session we will be (mainly) using the MLE (Maximum Likelihood Estimation) and MAP (Maximum A Posteriori estimation) methods. +The search for optimal parameter values usually involves a function, such as a loss function, a (log) likelihood function or a posterior distribution function. In this session we will be (mainly) using the MLE (Maximum Likelihood Estimation) and MAP (Maximum A Posteriori estimation) methods, which respectively try to maximize the likelihood function and the posterior probability of the parameters given the data. -In the MLE method, a likelihood function of a given probability density function is maximized during the search of optimal parameter values of a model in order to fit experimental data. The parameter values are considered unknown but viewed as fixed points.
+In order to understand the difference, recall Bayes' theorem applied to a set of parameters $θ$ and data $D$: +```math +P(θ \mid D) = \frac{P(D \mid θ) \, P(θ)}{P(D)} +``` +In the MLE method, the likelihood function $P(D \mid θ)$ (= the probability of the data given the parameters) is maximized during the search of optimal parameter values of a model in order to fit experimental data. The parameter values are considered unknown but viewed as fixed points. -In the MAP method, a posterior distribution function is maximized. Instead of viewing the parameter values as fixed points, they are now treated as random variables in the model which follow a prior distribution. In other words, we have prior belief in which distribution these parameters come from (Normal, Beta, etc). Once new data comes in, we update our prior belief, leading to a posterior belief. Hence, we now have a better idea from which distribution these parameters come. +In the MAP method, the posterior probability $P(θ \mid D)$ (= the probability of the parameters given the data) is maximized instead. Instead of viewing the parameter values as fixed points, they are now treated as random variables in the model which follow a prior distribution. In other words, we have prior belief in which distribution these parameters come from (Normal, Beta, etc). Once new data comes in, we update our prior belief, leading to a posterior belief. Hence, we now have a better idea from which distribution these parameters come. -One caveat with the MAP method is that it only considers the most likely point without taking into account other values of parameters from posterior distribution, which leads to a huge loss of useful information in posterior distribution. A better, yet computationally exhaustive method is using the MCMC (Markov chain Monte Carlo) sampling methods. Here we will use the NUTS sampler in this session. 
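As a toy sketch of the difference (a hypothetical example, not part of the exercises): the MAP objective is simply the MLE objective plus the log-prior.

```julia
using Distributions

# Hypothetical toy problem: estimate the mean m of normally distributed data.
data = [1.8, 2.1, 2.4, 1.9]
prior = Normal(0.0, 10.0)  # prior belief about m; only MAP uses it

loglik(m) = sum(logpdf.(Normal(m, 1.0), data))    # log P(D | m)
mle_objective(m) = loglik(m)                      # MLE: maximize the likelihood alone
map_objective(m) = loglik(m) + logpdf(prior, m)   # MAP: add log P(m), i.e. the log-prior
```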
+One caveat with the MLE and MAP methods is that they only return a point estimate of the optimal value, which gives no information on how certain we are about this value. A more informative, yet computationally expensive method is using the MCMC (Markov chain Monte Carlo) sampling methods, which approximate the entire posterior distribution of the unknown values. """ # ╔═╡ 75efff36-8da7-4d04-afa2-a2f8324bc103 md""" -In this notebook we will calibrate the different parameters involved in the grass growth models. Therefore, we will implement an objective function for each model and then minimizing them using an optimization algorithm. To illustrate this concept, we first revisit the three simple models modelling the grass growth yield. +In this notebook we will calibrate the different parameters involved in the grass growth models. To illustrate this concept, we first revisit the three simple models modelling the grass growth yield. """ # ╔═╡ 3dcb9c9d-370b-4031-b7c0-cee80742557a @@ -71,11 +83,11 @@ In this notebook, three different models will be used, each modelling the yield - Logistic growth model: $\cfrac{dW}{dt} = \mu \left( 1 - \cfrac{W}{W_f} \right) W$ - Exponential growth model: $\cfrac{dW}{dt} = \mu \left( W_f - W \right)$ -- Gompertz growth model: $\cfrac{dW}{dt} = \left( \mu - D \ln(W) \right) W$ +- Gompertz growth model: $\cfrac{dW}{dt} = \left( \mu - d \ln(W) \right) W$ -with output $W$ the grass yield, and $W_f$, $\mu$ and $D$ parameters. The table below show the parameter values and the initial condition that will be used as initial values for the optimization algorithm: +with output $W$ the grass yield, and $W_f$, $\mu$ and $d$ parameters. The table below shows some typical parameter values and initial conditions for grasslands similar to the one we observed, which we can use as prior information. 
-| | $\mu$ | $W_f$ | $D$ | $W_0$ | +| | $\mu$ | $W_f$ | $d$ | $W_0$ | |:----------- |:----------:|:-----------:|:------------:|:------------:| | Logistic | 0.07 | 10.0 | | 2.0 | | Exponential | 0.02 | 10.0 | | 2.0 | @@ -97,43 +109,6 @@ md""" Variables containing the initial condition and parameters values will be defined later in the objective function. """ -# ╔═╡ 9a5bc72b-346d-4e95-a873-783037ed98bc -md""" -### Logistic growth model - -We will illustrate the calibration with the logistic growth model: -""" - -# ╔═╡ ba56adb1-9405-40d5-be48-4273b42ab145 -growth_log = @reaction_network begin - μ*(1-W/Wf), W --> 2W -end - -# ╔═╡ 6e3e53ea-4fe7-4a34-8b00-cbf8d63a3203 -osys_log = convert(ODESystem, growth_log) - -# ╔═╡ f4748167-b635-47a1-9015-32e1258c0afa -md""" -Check the order of the parameters: -""" - -# ╔═╡ a022b2ab-68a0-40ca-b914-7a2adcf4ae39 -parameters(growth_log) - -# ╔═╡ acccb2fa-12b2-4fc7-91e3-58a4b1a02892 -md""" -Next, we will need to create an `ODEProblem` in advance before we can optimize some of its parameters. We will provide the values in the aforementioned table as initial values for the problem. 
-""" - -# ╔═╡ e54ea8d1-0854-44fa-aed8-45d106e921e4 -u0_log = [:W => 2.0] - -# ╔═╡ 8d96eb17-ce19-4523-916f-3cd0441a16ca -params_log = [:μ => 0.07, :Wf => 10.0] - -# ╔═╡ 5c9db9df-0cbd-41ac-afe9-fb5616c967be -oprob_log = ODEProblem(growth_log, u0_log, tspan, params_log) - # ╔═╡ 1aa44f2b-6f33-437f-b9dd-89762d9f28ea md""" ### The measurement data @@ -167,39 +142,147 @@ scatter(t_meas, W_meas, title="Grass growth data", xlims=(0, 100), ylims=(0, 14)) -# ╔═╡ d75246d4-e03b-4684-be7d-4bcfb61ed7ef +# ╔═╡ 50e6af5b-04f4-495c-853b-c746fd27254f +md"## Calibration with Turing" + +# ╔═╡ a665740a-ea94-4452-8cbc-a1647319dfce md""" -### Declaration of the Turing model +We will be using the familiar Turing framework to perform optimisation, which consists roughly of the following steps: +- Define your model as a Turing model, with the following components: + - The observed input data as input of the model + - Priors for all unknown values such as initial values, parameter values and the measurement error + - An (ODE-based) model that predicts values of the output variable based on the observed input values and the unknown initial values, parameter values, etc. + - The relationship between the observed outputs and the predicted outputs, which often comes down to specifying the measurement error +- Instantiate the Turing model and condition it on the observed values of the output variable +- Call an optimisation algorithm on the Turing model +- Extract the optimized parameters +- Visualise the results +""" + +# ╔═╡ b2f20a9a-0613-4bb4-a1d3-df1620aef105 +md"### The priors" + +# ╔═╡ 6852f166-350e-4434-8304-382c01113ba1 +md""" +During calibration, the prior distributions have two roles: +1. Define for every unknown quantity **the bounds**: the optimizer will only search within the domain of the prior distribution. Note: you can add additional bounds to any existing distribution using the `truncated` function. +1. 
Define part of the **posterior probability** of the parameter values given the data. In other words, the theoretically optimal values depend on your prior distributions. + +**Note that the second only applies to methods that take prior information into account, which Maximum Likelihood Estimation (MLE) does not.** +""" + +# ╔═╡ 4f78b2e8-9132-4175-b9ae-9f5979649e90 +md"#### Choice of priors for the grass growth models" + +# ╔═╡ 557e69c5-d0c6-41ee-b3df-851bd1d16923 +md""" +In general for our grass models, we will need to define priors for the following: +- the measurement error (standard deviation) $\sigma_W$. +- the initial condition $W_0$. +- the parameters $\mu$ and either $W_f$ or $d$ (depending on the model). +""" + +# ╔═╡ 237934b7-c89a-4ef6-938f-4629c0fa4c95 +md""" +For the measurement error $\sigma_W$, a distribution with most of its probability density around 0 and a long positive tail is often a good choice. We will use the Exponential distribution here for this purpose. As for its parameters, considering our yield is in the order of 1 to 10 [t/ha], an expected value (and therefore standard deviation) of around 1 seems appropriate. +""" + +# ╔═╡ 513ee94d-5d49-43d2-a1db-638fe9e2de1a +@bind example_mean_error Slider(0.01:0.01:10.0, show_value = true, default = 1) + +# ╔═╡ 8e163f77-b762-427a-b005-313284dc2e9e +plot(Exponential(example_mean_error), title = "Example prior for the measurement error", legend = false) + +# ╔═╡ d9c38e9d-12b8-4e98-8546-bbc123960e6f +md""" +For the initial and parameter values, we will default to log-normal distributions. 
These have the following properties that generally fit well with biological parameters, as is the case in this notebook: +- A positive domain, which bounds possible values to the positive numbers +- Most of the probability density around the expected value +- A long positive tail, which allows for outliers +As the distribution of a variable whose logarithm is normally distributed, it does have some strange behaviour: the arguments of the distribution are the mean and standard deviation of the **logarithm**. To clarify, for $X \sim \mathrm{LogNormal(μ, σ)}$: +- $μ = E[\mathrm{log}(X)]$ +- $σ = \sqrt{\mathrm{Var}(\mathrm{log}(X))}$ +Practically, this means **you need to specify the logarithm of the desired central value (strictly, the median) and play around with the standard deviation**. For this exercise, we will choose an expected value based on the table discussed earlier in this section and keep the standard deviation to the default value of 1. Additionally, for some parameters we may want to truncate them to keep them within (biologically and numerically) sensible bounds. 
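A minimal numerical check of this parameterization (an illustrative sketch; `d` is just an example distribution):

```julia
using Distributions, Statistics

# The first argument of LogNormal is the mean of log(X), so exp(μ) is the
# median of X, while the mean equals exp(μ + σ^2/2) and is always larger.
d = LogNormal(log(10), 1.0)
median(d)  # exactly 10.0
mean(d)    # exp(log(10) + 0.5), roughly 16.5
```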
+""" + +# ╔═╡ 39c70e22-86e0-467e-abab-24f99548b096 +@bind example_log_μ Slider(0.01:0.01:10.0, default = 0.1, show_value = true) + +# ╔═╡ ddef987d-59db-4adf-ae77-d4d32a41ddec +@bind example_log_σ Slider(0.01:0.01:3.0, default = 1.0, show_value = true) + +# ╔═╡ 54fb706b-9036-42b5-833e-4081f4c2179c +begin + example_param_prior = LogNormal(log(example_log_μ), example_log_σ) + + println("""Look at this strange distribution with + - mean = $(mean(example_param_prior)) + - median = $(median(example_param_prior)) + - σ = $(std(example_param_prior))""") + plot(example_param_prior, xlims = (0, 5*example_log_μ), title = "Example prior for the model parameters", legend = false) end  # ╔═╡ 9a5bc72b-346d-4e95-a873-783037ed98bc md""" ## Example: the Logistic growth model """ + +# ╔═╡ 4b17f7e6-26b0-425a-9f53-ccf1689639fe +md""" +We will illustrate the calibration with the logistic growth model. Defining the model can be done via a Catalyst reaction network or via a model built with ModelingToolkit; we will show both possibilities below. +""" + +# ╔═╡ 6736542c-5378-480e-a56b-65956b416225 +md""" +#### 1) Catalyst based model +""" + +# ╔═╡ ba56adb1-9405-40d5-be48-4273b42ab145 +# growth_log = @reaction_network begin +# @species W(t)=2.0 +# @parameters μ=0.07 Wf=10.0 +# μ*(1-W/Wf), W --> 2W +# end + +# ╔═╡ 1c311b6f-6b18-4170-b500-33a8e4d3cb2d +md""" +#### 2) ModelingToolkit based model +""" + +# ╔═╡ 50220e1f-8e42-44da-be03-08c11df967f0 +@variables W(t) + +# ╔═╡ 449286b0-0210-4c6b-b28f-fca87b52d674 +@parameters μ Wf d + +# ╔═╡ f720eac9-cb29-4eef-ac94-471395941c0f +# IF YOU WANT TO USE THE CATALYST MODEL INSTEAD, PLEASE COMMENT OUT THE FOLLOWING LINE FIRST!!! 
+@mtkbuild growth_log = ODESystem([D(W) ~ μ*(1 - W/Wf)*W], t) + +# ╔═╡ d75246d4-e03b-4684-be7d-4bcfb61ed7ef +md""" +### Declaration of the Turing model """ # ╔═╡ 8a9115eb-4044-4cab-a7db-39b5dd86c70d -@model function growth_log_inference(t_meas, W_meas) - σ_W ~ InverseGamma() - W0 ~ LogNormal() - μ ~ LogNormal() - Wf ~ LogNormal() +@model function growth_log_inference(t_meas) + σ_W ~ Exponential() + W0 ~ LogNormal(log(1)) + μ ~ LogNormal(log(0.1)) + Wf ~ truncated(LogNormal(log(10)), lower = 0.1) # prevent zeros in denominator of our model u0_log = [:W => W0] - params_log = [:μ => μ, :Wf => Wf] - oprob_log = ODEProblem(growth_log, u0_log, tspan, params_log) + parms_log = [:μ => μ, :Wf => Wf] + oprob_log = ODEProblem(growth_log, u0_log, tspan, parms_log) osol_log = solve(oprob_log, AutoTsit5(Rosenbrock23()), saveat=t_meas) - W_meas ~ MvNormal(osol_log[:W], σ_W^2 * I) + W_s ~ MvNormal(osol_log[:W], σ_W) end # ╔═╡ 48c9f616-d298-40da-b917-225abd39b3d9 md""" Some remarks: - The time points are the ones from the measurements, therefore, we set: `saveat=t_meas`. -- We need to solve the ODE problem inside the Turing model function with values for $W_0$, $\mu$ and $W_f$ sampled from the distributions. Therefore, we need to remake our ODE problem with the appropriate initial and parameter values and solve it. - Depending on the priors, the parameter values of our ODE may vary wildly during calibration. As this can influence the stiffness of the system, it can be beneficial to use an ODE solver that automatically detects the stiffness of the system and switches to a stiff solver if necessary, such as `AutoTsit5(Rosenbrock23())`. We don't expect you to know when this is necessary, but if your calibration regularly gives instability errors, this may help! 
""" @@ -209,7 +292,7 @@ We will provide the measurements to the Turing model: """ # ╔═╡ 8b6534d6-776b-4285-8498-a9b34051facc -growth_log_inf = growth_log_inference(t_meas, W_meas) +growth_log_inf = growth_log_inference(t_meas) | (W_s = W_meas, ) # ╔═╡ a6972aef-63ad-401c-acf5-6d59f9fc6698 md""" @@ -231,11 +314,11 @@ results_log_mle = optimize(growth_log_inf, MLE(), NelderMead()) # ╔═╡ e55404ab-6762-4f39-bb42-9c195334a214 md""" -You can visualize a summary of the optimized parameters by piping them to `coeftable`: +You can visualize a summary of the optimized parameters by piping them to `coeftable`. Beware that this can take a lot of time... """ # ╔═╡ 80e7f6b8-7592-48a3-8587-f1953d1bfcd8 -results_log_mle |> coeftable +# results_log_mle |> coeftable # ╔═╡ a1ca7d0e-639c-42d4-be09-5c61a2008f29 md""" @@ -262,7 +345,7 @@ Setting up initial condition with optimized initial condition: """ # ╔═╡ 590b1006-0e37-4668-9b4a-3588fab45696 -u0_opt1_log = [:W => W0_opt1_log] +u0_opt1_log = [:W=>W0_opt1_log] # ╔═╡ 8ff28a2e-185d-4dca-ad3b-a0b1507646d6 md""" @@ -270,7 +353,7 @@ Setting up parameter values with optimized parameter values: """ # ╔═╡ 14d24cbd-3259-41bb-9013-b4fe25a3be4c -params_opt1_log = [:μ => μ_opt1_log, :Wf => Wf_opt1_log] +parms_opt1_log = [:μ=>μ_opt1_log, :Wf=>Wf_opt1_log] # ╔═╡ 6c134677-3ec1-4e0a-88b5-01341a096675 md""" @@ -278,7 +361,7 @@ Next, we create an ODEProblem and solve it: """ # ╔═╡ 7a81f4a0-8f7c-4e05-9c3f-2438eab9b691 -oprob_opt1_log = ODEProblem(growth_log, u0_opt1_log, tspan, params_opt1_log) +oprob_opt1_log = ODEProblem(growth_log, u0_opt1_log, tspan, parms_opt1_log) # ╔═╡ ac80099b-d8f0-4eba-809d-d482bd354d35 osol_opt1_log = solve(oprob_opt1_log, Tsit5(), saveat=0.5) @@ -310,11 +393,11 @@ results_log_map = optimize(growth_log_inf, MAP(), NelderMead()) # ╔═╡ 07efb21a-0dac-46d5-b9e1-8f4757c6cedf md""" -You can visualize a summary of the optimized parameters by piping them to `coeftable`: +You can visualize a summary of the optimized parameters by piping 
them to `coeftable`. Beware that this can take a lot of time... """ # ╔═╡ a93a7eeb-aad1-49f3-a05f-1512adb08b6b -results_log_map |> coeftable +# results_log_map |> coeftable # ╔═╡ ccc66805-efce-4f25-b76b-7cff23901ba4 md""" @@ -341,7 +424,7 @@ Setting up initial condition with optimized initial condition: """ # ╔═╡ a570ebab-64de-4934-b887-77a8e2fb42e8 -u0_opt2_log = [:W => W0_opt2_log] +u0_opt2_log = [:W=>W0_opt2_log] # ╔═╡ feee9cc1-ebe2-4c77-a11e-53ca00864ccf md""" @@ -349,7 +432,7 @@ Setting up parameter values with optimized parameter values: """ # ╔═╡ 73e86176-f314-463c-8821-51c0f3cc1a56 -params_opt2_log = [:μ => μ_opt2_log, :Wf => Wf_opt2_log] +parms_opt2_log = [:μ=>μ_opt2_log, :Wf=>Wf_opt2_log] # ╔═╡ c17c4e00-8688-4f9c-b0ae-741d70601aba md""" @@ -357,7 +440,7 @@ Next, we create an ODEProblem and solve it: """ # ╔═╡ f751b962-1810-429c-b34b-658b63fa9ba0 -oprob_opt2_log = ODEProblem(growth_log, u0_opt2_log, tspan, params_opt2_log) +oprob_opt2_log = ODEProblem(growth_log, u0_opt2_log, tspan, parms_opt2_log) # ╔═╡ 0fd058cc-9bd8-4e12-b5dd-79818519165b osol_opt2_log = solve(oprob_opt2_log, Tsit5(), saveat=0.5) @@ -387,17 +470,20 @@ We will use Markov chain Monte Carlo (MCMC) method in combination with the No U- # ╔═╡ 0c047043-3284-422a-9c88-2f4f4c170edf results_log_nuts = sample(growth_log_inf, NUTS(), 500) +# ╔═╡ 78a608a4-0ba7-400e-abb9-a9285c60681c +plot(results_log_nuts) # check convergence + # ╔═╡ 19c362cb-2764-41c9-a571-2e8e2bfcde93 -summarize(results_log_nuts) +# summarize(results_log_nuts) # ╔═╡ 93db47b2-34e8-43b4-beac-b5620fd444e7 -W0_opt3_log = summarize(results_log_nuts)[:W0, :mean] +W0_opt3_log = mean(results_log_nuts[:W0]) # ╔═╡ 68c71cd2-6cdc-4b80-b6e7-75bfea344295 -μ_opt3_log = summarize(results_log_nuts)[:μ ,:mean] +μ_opt3_log = mean(results_log_nuts[:μ]) # ╔═╡ d6b9eaba-1d43-4e56-8fa1-bb2f87f6fe79 -Wf_opt3_log = summarize(results_log_nuts)[:Wf ,:mean] +Wf_opt3_log = mean(results_log_nuts[:Wf]) # ╔═╡ b843ee5e-9618-4b50-93db-77e19b4be366 md""" @@ -410,7 
+496,7 @@ Setting up initial condition with optimized initial condition: """ # ╔═╡ ecf4b951-9b5f-441a-89f5-b0ccd040ba02 -u0_opt3_log = [:W => W0_opt3_log] +u0_opt3_log = [:W=>W0_opt3_log] # ╔═╡ b92bf532-5610-4ce8-b469-154b20cc5956 md""" @@ -418,7 +504,7 @@ Setting up parameter values with optimized parameter values: """ # ╔═╡ e629fbd1-a7c7-4dc3-8b7d-bdca380fe5ad -params_opt3_log = [:μ => μ_opt3_log, :Wf => Wf_opt3_log] +parms_opt3_log = [:μ=>μ_opt3_log, :Wf=>Wf_opt3_log] # ╔═╡ a48df980-4847-418e-813d-1875d032ffd9 md""" @@ -426,7 +512,7 @@ Next, we create an ODEProblem and solve it: """ # ╔═╡ 348e7454-c6dc-4d1e-b18b-6b34bc2fc0fc -oprob_opt3_log = ODEProblem(growth_log, u0_opt3_log, tspan, params_opt3_log) +oprob_opt3_log = ODEProblem(growth_log, u0_opt3_log, tspan, parms_opt3_log) # ╔═╡ f71f0cf4-b63e-452c-b971-f30d7503f1e5 osol_opt3_log = solve(oprob_opt3_log, Tsit5(), saveat=0.5) @@ -457,68 +543,55 @@ Calibrate the initial condition and both parameters of the exponential growth mo # ╔═╡ cdab3079-04b0-4a44-b770-468c20e321e4 md""" -We have seen before that a possible *reaction network object* for the exponential growth model can be implemented as follows: +Implement a *reaction network object* using Catalyst for the exponential growth model: """ # ╔═╡ cf1a144e-09e9-42a3-b2a3-b8676a200a39 +# growth_exp = @reaction_network begin +# missing +# missing +# end growth_exp = @reaction_network begin μ*Wf, 0 --> W μ, W --> 0 end -# ╔═╡ 85c57cd3-bf00-437e-937b-f0c3b62f74ff -parameters(growth_exp) - -# ╔═╡ c6d373f4-f13c-4135-823d-ee8fbeb71b56 +# ╔═╡ d69ed5dd-80cd-427e-9245-e42576a4688d md""" -Create an `ODEProblem`. Use the values in the aforementioned table as initial values for the problem. Use the same `tspan` as before. +Convert the reaction network model into an ODE system to verify. 
""" -# ╔═╡ a97abaa7-b642-4201-86f1-5c8995b07536 -# u₀_exp = missing # Uncomment and complete the instruction -u0_exp = [:W => 2.0] - -# ╔═╡ 387730b4-bd06-492f-94e6-231bd68b3436 -# params_exp = missing # Uncomment and complete the instruction -params_exp = [:μ => 0.02, :Wf => 10.0] - -# ╔═╡ 290a7fe8-3b1e-423f-8b30-9bd8903d2e8f -# oprob_exp = missing # Uncomment and complete the instruction -oprob_exp = ODEProblem(growth_exp, u0_exp, tspan, params_exp) - -# ╔═╡ febe2b67-2a8f-4575-946d-30877bd5f2d4 -md""" -Use the same measurement data (`W_meas`, `t_meas`) as before. -""" +# ╔═╡ 8b429656-1c57-4278-9633-71311f400ed8 +# missing +convert(ODESystem, growth_exp) # ╔═╡ b4300e8a-8052-419b-98c8-0508ebee2393 md""" -Declare the Turing model. Take the same priors (and distributions) as before. +Declare the Turing model. Try to find sensible prior distributions. """ # ╔═╡ 2c6ae74c-2da4-4867-ad8a-f4e835101d63 -# Uncomment and complete the instruction # @model function growth_exp_inference(t_meas, W_meas) # σ_W ~ missing # W0 ~ missing # μ ~ missing # Wf ~ missing # u0_exp = missing -# params_exp = missing +# parms_exp = missing # oprob_exp = missing # osol_exp = missing # W_meas ~ missing # end -@model function growth_exp_inference(t_meas, W_meas) - σ_W ~ InverseGamma() - W0 ~ LogNormal() - μ ~ LogNormal() - Wf ~ LogNormal() +@model function growth_exp_inference(t_meas) + σ_W ~ Exponential() + W0 ~ LogNormal(log(1)) + μ ~ LogNormal(log(0.1)) + Wf ~ LogNormal(log(10)) u0_exp = [:W => W0] - params_exp = [:μ => μ, :Wf => Wf] - oprob_exp = ODEProblem(growth_exp, u0_exp, tspan, params_exp) + parms_exp = [:μ => μ, :Wf => Wf] + oprob_exp = ODEProblem(growth_exp, u0_exp, tspan, parms_exp) osol_exp = solve(oprob_exp, AutoTsit5(Rosenbrock23()), saveat=t_meas) - W_meas ~ MvNormal(osol_exp[:W], σ_W^2 * I) + W_s ~ MvNormal(osol_exp[:W], σ_W) end # ╔═╡ c6a5d453-d610-4f65-847c-c878dd41726c @@ -527,8 +600,8 @@ Provide the measurements to the Turing model. 
""" # ╔═╡ fff17cf7-173d-4f64-94a9-4bf46acc882d -# growth_exp_inf = missing # Uncomment and complete the instruction -growth_exp_inf = growth_exp_inference(t_meas, W_meas) +# growth_exp_inf = missing +growth_exp_inf = growth_exp_inference(t_meas) | (W_s = W_meas, ) # ╔═╡ eee55784-a641-445e-be75-0b19e2a94754 md""" @@ -536,17 +609,17 @@ Optimize the priors ($\sigma_W$, $W_0$, $\mu$ and $W_f$). Do this with `MLE` met """ # ╔═╡ 7844e4f5-3c7d-4b4b-beee-970c998c67a6 -# results_exp_mle = missing # Uncomment and complete the instruction +# results_exp_mle = missing results_exp_mle = optimize(growth_exp_inf, MLE(), NelderMead()) # ╔═╡ c81d0140-3f4e-4eb4-8a77-1f48c5e0ecbf md""" -Visualize a summary of the optimized parameters. +Visualize a summary of the optimized parameters. Beware: this can take a lot of time... """ # ╔═╡ 7456455b-4f31-488f-990f-6ce534038e08 -# missing # Uncomment and complete the instruction -results_exp_mle |> coeftable +# missing +# results_exp_mle |> coeftable # ╔═╡ b10c2ce4-d363-429c-a64c-ec29652137a5 md""" @@ -554,15 +627,15 @@ Get the optimized values and assign them to `W0_opt_exp`, `μ_opt_exp` and `Wf_o """ # ╔═╡ 23b629a6-6866-416a-a768-9617ce6301db -# W₀_opt_exp = missing # Uncomment and complete the instruction +# W₀_opt_exp = missing W0_opt_exp = coef(results_exp_mle)[:W0] # ╔═╡ 8788082d-f5d1-4385-8037-a0d360a841c7 -# μ_opt_exp = missing # Uncomment and complete the instruction +# μ_opt_exp = missing μ_opt_exp = coef(results_exp_mle)[:μ] # ╔═╡ 03a4fa85-08db-46d4-bb53-c0ccea90a211 -# Wf_opt_exp = missing # Uncomment and complete the instruction +# Wf_opt_exp = missing Wf_opt_exp = coef(results_exp_mle)[:Wf] # ╔═╡ 8f5e1413-227c-44fa-bb2d-3653cbc27e38 @@ -576,8 +649,8 @@ Set up initial condition with optimized initial condition: " # ╔═╡ 30602ff1-041b-4fca-bf8e-55ff57df9e37 -# u₀_opt_exp = missing # Uncomment and complete the instruction -u0_opt_exp = [:W => W0_opt_exp] +# u₀_opt_exp = missing +u0_opt_exp = [:W=>W0_opt_exp] # ╔═╡ 
881011be-6434-416f-915b-3333e8dea32f
md"""
@@ -585,8 +658,8 @@ Set up parameter values with optimized parameter values:
"""

# ╔═╡ 5602b88a-07f8-438b-994c-65f11e17a0ba
-# params_opt_exp = missing # Uncomment and complete the instruction
-params_opt_exp = [:μ => μ_opt_exp, :Wf => Wf_opt_exp]
+# parms_opt_exp = missing
+parms_opt_exp = [:μ=>μ_opt_exp, :Wf=>Wf_opt_exp]

# ╔═╡ 25be4255-0888-4ecd-a2fd-d66402c5cb50
md"""
@@ -594,11 +667,11 @@ Create an ODEProblem and solve it. Use `Tsit5()` and `saveat=0.5`.
"""

# ╔═╡ 36e8a174-d526-45ee-b3c6-88d698ad5d5f
-# oprob_opt_exp = missing # Uncomment and complete the instruction
-oprob_opt_exp = ODEProblem(growth_exp, u0_opt_exp, tspan, params_opt_exp)
+# oprob_opt_exp = missing
+oprob_opt_exp = ODEProblem(growth_exp, u0_opt_exp, tspan, parms_opt_exp)

# ╔═╡ 7594147d-b3da-4e0d-896d-41baacb6d7be
-# osol_opt_exp = missing # Uncomment and complete the instruction
+# osol_opt_exp = missing
osol_opt_exp = solve(oprob_opt_exp, Tsit5(), saveat=0.5)

# ╔═╡ 8e047be9-f0f0-4a75-91b5-c523f55f8c67
@@ -607,10 +680,9 @@ Plot $W$ simulated with the optimized initial value and parameter values togethe
"""

# ╔═╡ 1a9587aa-2356-48df-abcc-2ce874fa5d24
-# Uncomment and complete the instruction
# begin
-#	missing
-#	missing
+#	missing
+#	missing
# end
begin
	plot(osol_opt_exp, label="Exponential growth", xlabel="t",
@@ -627,62 +699,40 @@ Calibrate the initial condition and both parameters of the Gompertz growth model

# ╔═╡ e754826a-7411-4072-b0dc-a4bad7a15f98
md"""
-We have seen before that a possible *reaction network object* for the Gompertz growth model can be implemented as follows:
+Implement an ODE system using ModelingToolkit (MTK) for the Gompertz growth model.\
+*Hint*: there is no need to redefine the variable `W(t)` or the parameters `μ` and `d`, since they are already defined at the beginning of this notebook.
""" -# ╔═╡ bc1edcbe-46eb-4531-9c5f-dee8d5dc2ff9 -growth_gom = @reaction_network begin - μ-D*log(W), W --> 2W -end - -# ╔═╡ 47fb9e4c-df6a-4811-9980-99d595a34908 -md""" -Create an `ODEProblem`. Use the values in the aforementioned table as initial values for the problem. Use the same `tspan` as before. -""" - -# ╔═╡ bbd150de-ff9a-4127-a0dd-2f9762f92b07 -# u₀_gom = missing # Uncomment and complete the instruction -u0_gom = [:W => 2.0] - -# ╔═╡ da5a0cbb-b033-46f1-a300-3954de138835 -# params_gom = missing # Uncomment and complete the instruction -params_gom = [:μ => 0.09, :D => 0.04] - -# ╔═╡ 73da8f53-c3af-43b0-9b23-60471f1e3587 -# oprob_gom = missing # Uncomment and complete the instruction -oprob_gom = ODEProblem(growth_gom, u0_gom, tspan, params_gom) - -# ╔═╡ b0e67564-efe8-4fb2-bcf2-a711b770244e -md""" -Use the same measurement data (`W_meas`, `t_meas`) as before. -""" +# ╔═╡ 6e85c08e-9a97-4897-9c81-89517f55e254 +@mtkbuild growth_gom = ODESystem([D(W) ~ (μ - d*log(W))*W], t) # ╔═╡ e5081280-d226-4834-8932-c89becd8313c md""" -Declare the Turing model. Take the same priors as before. +Declare the Turing model. Try to find sensible prior distributions. """ -# Take for $\sigma_W$ and $W_0$ the same priors (and distributions) as before, but take for $\mu$ a Uniform prior distribution in the range $[0, 2]$ and the same for $D$ but in the range $[0, 1]$. 
# ╔═╡ c739a908-2353-4e7a-8fbd-f640dc8cabe0
-# Uncomment and complete the instruction
# @model function growth_gom_inference(t_meas, W_meas)
#     σ_W ~ missing
#     W0 ~ missing
#     μ ~ missing
-#     D ~ missing
+#     d ~ missing
+#     u0_gom = missing
+#     parms_gom = missing
+#     oprob_gom = missing
#     osol_gom = missing
#     W_meas ~ missing
# end
-@model function growth_gom_inference(t_meas, W_meas)
-    σ_W ~ InverseGamma()
-    W0 ~ LogNormal()
-    μ ~ LogNormal()
-    D ~ LogNormal()
+@model function growth_gom_inference(t_meas)
+    σ_W ~ Exponential()
+    W0 ~ truncated(LogNormal(log(1.0)), lower = 1e-5) # prevent log(0)
+    μ ~ truncated(LogNormal(log(0.1)), upper = 1.0) # cap μ: a large μ relative to d pushes the Gompertz plateau exp(μ/d) high enough to overflow and crash the solver
+    d ~ LogNormal(log(0.1))
    u0_gom = [:W => W0]
-    params_gom = [:μ => μ, :D => D]
-    oprob_gom = ODEProblem(growth_gom, u0_gom, tspan, params_gom)
+    parms_gom = [:μ => μ, :d => d]
+    oprob_gom = ODEProblem(growth_gom, u0_gom, tspan, parms_gom)
    osol_gom = solve(oprob_gom, AutoTsit5(Rosenbrock23()), saveat=t_meas)
-    W_meas ~ MvNormal(osol_gom[:W], σ_W^2 * I)
+    W_s ~ MvNormal(osol_gom[:W], σ_W)
end

# ╔═╡ 1d0383ad-54d6-4ff2-8555-def83bfff0e6
@@ -691,8 +741,8 @@ Provide the measurements to the Turing model.
"""

# ╔═╡ cd1cf2f8-9f7f-4ed4-9cb7-1a6efee68ab4
-# growth_gom_inf = missing # Uncomment and complete the instruction
-growth_gom_inf = growth_gom_inference(t_meas, W_meas)
+# growth_gom_inf = missing
+growth_gom_inf = growth_gom_inference(t_meas) | (W_s = W_meas, )

# ╔═╡ aba74ee0-0163-4e15-8b49-d8dcad4839f7
md"""
@@ -700,17 +750,17 @@ Optimize the priors ($\sigma_W$, $W_0$, $\mu$ and $D$). 
Do this with `MLE` metho """ # ╔═╡ 0eda4142-1aaf-4e17-bd78-857e13e94acd -# results_gom_mle = missing # Uncomment and complete the instruction +# results_gom_mle = missing results_gom_mle = optimize(growth_gom_inf, MLE(), NelderMead()) # ╔═╡ 50629194-98ed-4d45-86a2-95ac22daac29 md""" -Visualize a summary of the optimized parameters. +Visualize a summary of the optimized parameters. Beware: this can take a lot of time... """ # ╔═╡ 9c239fc6-275c-4d64-9fa2-6fd57295b757 -# missing # Uncomment and complete the instruction -results_gom_mle |> coeftable +# missing +# results_gom_mle |> coeftable # ╔═╡ dede17f2-655d-4871-b6de-5a32804947dd md""" @@ -718,16 +768,16 @@ Get the optimized values and assign them to `W0_opt_gom`, `μ_opt_gom` and `D_op """ # ╔═╡ e8c3f042-6058-4040-a67e-18563c04ee93 -# W₀_opt_gom = missing # Uncomment and complete the instruction +# W₀_opt_gom = missing W0_opt_gom = coef(results_gom_mle)[:W0] # ╔═╡ 665f4d03-3521-475a-a195-f861fd26bb69 -# μ_opt_gom = missing # Uncomment and complete the instruction +# μ_opt_gom = missing μ_opt_gom = coef(results_gom_mle)[:μ] # ╔═╡ 13775d58-ac61-431c-a6c9-447c1eec7942 -# D_opt_gom = missing # Uncomment and complete the instruction -D_opt_gom = coef(results_gom_mle)[:D] +# D_opt_gom = missing +d_opt_gom = coef(results_gom_mle)[:d] # ╔═╡ bc92f996-b626-4965-a286-d2c848eb1a21 md""" @@ -740,8 +790,8 @@ Set up initial condition with optimized initial condition: """ # ╔═╡ 23fd9fd9-a13c-4c56-a0b4-daec6f1d2cd8 -# u₀_opt_gom = missing # Uncomment and complete the instruction -u0_opt_gom = [:W => W0_opt_gom] +# u₀_opt_gom = missing +u0_opt_gom = [:W=>W0_opt_gom] # ╔═╡ 6f414d15-af0a-452a-98a1-dc0b7e54d617 md""" @@ -749,8 +799,8 @@ Set up parameter values with optimized parameter values: """ # ╔═╡ 57ee8a12-24df-4598-935c-f5e259b504cb -# params_opt_gom = missing # Uncomment and complete the instruction -params_opt_gom = [:μ => μ_opt_gom, :D => D_opt_gom] +# parms_opt_gom = missing +parms_opt_gom = [:μ=>μ_opt_gom, :d=>d_opt_gom] # 
╔═╡ 48bc085c-9ce6-4752-a5a9-a814f803f571
md"""
@@ -758,11 +808,11 @@ Create an ODEProblem and solve it. Use `Tsit5()` and `saveat=0.5`.
"""

# ╔═╡ 5b9b2e5b-9deb-4ff6-a923-b15b8b08f0c9
-# oprob_opt_gom = missing # Uncomment and complete the instruction
-oprob_opt_gom = ODEProblem(growth_gom, u0_opt_gom, tspan, params_opt_gom)
+# oprob_opt_gom = missing
+oprob_opt_gom = ODEProblem(growth_gom, u0_opt_gom, tspan, parms_opt_gom)

# ╔═╡ 66ee9655-a006-4c0f-b1f1-5576efa8f896
-# osol_opt_gom = missing # Uncomment and complete the instruction
+# osol_opt_gom = missing
osol_opt_gom = solve(oprob_opt_gom, Tsit5(), saveat=0.5)

# ╔═╡ 52d7975a-5346-447f-9aad-4ecd11b6460a
@@ -771,10 +821,9 @@ Finally, we plot $W$ simulated with the optimized initial value and parameter va
"""

# ╔═╡ e96efd6a-a666-4120-8480-9423e5d82ae1
-# Uncomment and complete the instruction
# begin
-#	missing
-#	missing
+#	missing
+#	missing
# end
begin
	plot(osol_opt_gom, label="Gompertz growth", xlabel="t",
@@ -791,14 +840,14 @@ Which grass growth model fits these data best? 
How can you prove this numericall md"- Answer: missing" # ╔═╡ Cell order: +# ╟─37da8786-fea0-4c2f-a76f-6e6c68325a78 # ╠═a09f814a-0c6a-11ef-0e79-a50b01287d63 -# ╠═7f521435-63ac-4178-a4aa-93d9c45fe820 # ╠═f8a92690-990b-4341-89e1-322adbcb8d1b # ╠═015050b3-3339-4b1a-ad7d-c358cce73675 +# ╠═a83f424f-1bbb-4e71-af81-302c0ce68907 # ╠═dbfe4800-0974-4ca1-bb0a-d8803409a98b # ╠═6e227e07-166a-41ce-839a-4c4c72addb23 # ╠═b992c080-a0ce-4188-b632-e734a141e67d -# ╟─37da8786-fea0-4c2f-a76f-6e6c68325a78 # ╟─4623369d-8c5a-422d-9e40-0f1dd7586260 # ╟─3cb0a166-ac53-4c3f-9832-e93742040cfb # ╟─987f0a4d-e416-4ceb-adbe-3dcdca9d0996 @@ -808,15 +857,6 @@ md"- Answer: missing" # ╟─85cd60a8-b448-4375-9b6d-399c4336c319 # ╠═5b320989-3e0b-447b-bc9a-25fb221ce609 # ╟─2481cd4f-0efc-4450-ab3d-4a5492597f36 -# ╟─9a5bc72b-346d-4e95-a873-783037ed98bc -# ╠═ba56adb1-9405-40d5-be48-4273b42ab145 -# ╠═6e3e53ea-4fe7-4a34-8b00-cbf8d63a3203 -# ╟─f4748167-b635-47a1-9015-32e1258c0afa -# ╠═a022b2ab-68a0-40ca-b914-7a2adcf4ae39 -# ╟─acccb2fa-12b2-4fc7-91e3-58a4b1a02892 -# ╠═e54ea8d1-0854-44fa-aed8-45d106e921e4 -# ╠═8d96eb17-ce19-4523-916f-3cd0441a16ca -# ╠═5c9db9df-0cbd-41ac-afe9-fb5616c967be # ╟─1aa44f2b-6f33-437f-b9dd-89762d9f28ea # ╟─b2b433ed-0266-4bea-a7e8-32adba542d4c # ╠═7c966a66-0091-4b81-9a7e-02ccd0d3db10 @@ -824,8 +864,28 @@ md"- Answer: missing" # ╠═877298e8-b61b-4c3a-ba2c-2827acdcfb50 # ╟─ef06cc43-510b-4ff9-b0b7-1c7fc267e9b1 # ╠═cb2bc6ee-4211-47e1-9956-5cf1b0c0671d +# ╟─50e6af5b-04f4-495c-853b-c746fd27254f +# ╟─a665740a-ea94-4452-8cbc-a1647319dfce +# ╟─b2f20a9a-0613-4bb4-a1d3-df1620aef105 +# ╟─6852f166-350e-4434-8304-382c01113ba1 +# ╟─4f78b2e8-9132-4175-b9ae-9f5979649e90 +# ╟─557e69c5-d0c6-41ee-b3df-851bd1d16923 +# ╟─237934b7-c89a-4ef6-938f-4629c0fa4c95 +# ╠═513ee94d-5d49-43d2-a1db-638fe9e2de1a +# ╠═8e163f77-b762-427a-b005-313284dc2e9e +# ╟─d9c38e9d-12b8-4e98-8546-bbc123960e6f +# ╠═39c70e22-86e0-467e-abab-24f99548b096 +# ╠═ddef987d-59db-4adf-ae77-d4d32a41ddec +# ╠═54fb706b-9036-42b5-833e-4081f4c2179c +# 
╟─9a5bc72b-346d-4e95-a873-783037ed98bc +# ╟─4b17f7e6-26b0-425a-9f53-ccf1689639fe +# ╟─6736542c-5378-480e-a56b-65956b416225 +# ╠═ba56adb1-9405-40d5-be48-4273b42ab145 +# ╟─1c311b6f-6b18-4170-b500-33a8e4d3cb2d +# ╠═50220e1f-8e42-44da-be03-08c11df967f0 +# ╠═449286b0-0210-4c6b-b28f-fca87b52d674 +# ╠═f720eac9-cb29-4eef-ac94-471395941c0f # ╟─d75246d4-e03b-4684-be7d-4bcfb61ed7ef -# ╟─dd3a32f1-bdb6-44a3-acbe-f4269725c9e4 # ╠═8a9115eb-4044-4cab-a7db-39b5dd86c70d # ╟─48c9f616-d298-40da-b917-225abd39b3d9 # ╟─35f158c1-858d-4e4d-ac3d-bf4807dad9a0 @@ -872,6 +932,7 @@ md"- Answer: missing" # ╟─29170e2a-9916-438e-92ca-9f4783397b5e # ╟─7ff9fe52-156b-4a92-9058-781670de3abb # ╠═0c047043-3284-422a-9c88-2f4f4c170edf +# ╠═78a608a4-0ba7-400e-abb9-a9285c60681c # ╠═19c362cb-2764-41c9-a571-2e8e2bfcde93 # ╠═93db47b2-34e8-43b4-beac-b5620fd444e7 # ╠═68c71cd2-6cdc-4b80-b6e7-75bfea344295 @@ -890,12 +951,8 @@ md"- Answer: missing" # ╟─4aa71200-006b-4a15-ae75-67e36aa81522 # ╟─cdab3079-04b0-4a44-b770-468c20e321e4 # ╠═cf1a144e-09e9-42a3-b2a3-b8676a200a39 -# ╠═85c57cd3-bf00-437e-937b-f0c3b62f74ff -# ╟─c6d373f4-f13c-4135-823d-ee8fbeb71b56 -# ╠═a97abaa7-b642-4201-86f1-5c8995b07536 -# ╠═387730b4-bd06-492f-94e6-231bd68b3436 -# ╠═290a7fe8-3b1e-423f-8b30-9bd8903d2e8f -# ╟─febe2b67-2a8f-4575-946d-30877bd5f2d4 +# ╟─d69ed5dd-80cd-427e-9245-e42576a4688d +# ╠═8b429656-1c57-4278-9633-71311f400ed8 # ╟─b4300e8a-8052-419b-98c8-0508ebee2393 # ╠═2c6ae74c-2da4-4867-ad8a-f4e835101d63 # ╟─c6a5d453-d610-4f65-847c-c878dd41726c @@ -920,12 +977,7 @@ md"- Answer: missing" # ╠═1a9587aa-2356-48df-abcc-2ce874fa5d24 # ╟─785d500b-f8ea-446a-9952-2a5fd5d83d24 # ╟─e754826a-7411-4072-b0dc-a4bad7a15f98 -# ╠═bc1edcbe-46eb-4531-9c5f-dee8d5dc2ff9 -# ╟─47fb9e4c-df6a-4811-9980-99d595a34908 -# ╠═bbd150de-ff9a-4127-a0dd-2f9762f92b07 -# ╠═da5a0cbb-b033-46f1-a300-3954de138835 -# ╠═73da8f53-c3af-43b0-9b23-60471f1e3587 -# ╟─b0e67564-efe8-4fb2-bcf2-a711b770244e +# ╠═6e85c08e-9a97-4897-9c81-89517f55e254 # ╟─e5081280-d226-4834-8932-c89becd8313c # 
╠═c739a908-2353-4e7a-8fbd-f640dc8cabe0 # ╟─1d0383ad-54d6-4ff2-8555-def83bfff0e6 @@ -949,4 +1001,4 @@ md"- Answer: missing" # ╟─52d7975a-5346-447f-9aad-4ecd11b6460a # ╠═e96efd6a-a666-4120-8480-9423e5d82ae1 # ╟─f0b4772d-a72b-44e0-a3a1-ba9ad4c4dfeb -# ╟─7e98a771-52bb-484e-82ab-2e42e7cb4053 +# ╠═7e98a771-52bb-484e-82ab-2e42e7cb4053 diff --git a/exercises/student_notebooks/P6_calib/calib_irrigation.jl b/exercises/student_notebooks/P6_calib/calib_irrigation.jl index f40af3d..da26217 100644 --- a/exercises/student_notebooks/P6_calib/calib_irrigation.jl +++ b/exercises/student_notebooks/P6_calib/calib_irrigation.jl @@ -5,32 +5,22 @@ using Markdown using InteractiveUtils # ╔═╡ 55675f3d-2fae-4a97-a0a0-ead29a6352e6 -begin - # add this cell if you want the notebook to use the environment from where the Pluto server is launched - using Pkg - Pkg.activate("..") -end +using Pkg; Pkg.activate("..") # ╔═╡ 2b010e5c-1121-11ef-16fe-a5e3317122e4 -using Markdown - -# ╔═╡ 6750d246-8e7a-4cfb-810e-d1100aa4fdef -using InteractiveUtils +using Markdown, InteractiveUtils # ╔═╡ 4947b0fd-13be-4f6a-b605-ed35b509d7ff -using Catalyst, OrdinaryDiffEq +using ModelingToolkit, OrdinaryDiffEq # ╔═╡ 61d14819-ba44-40fe-95a9-9d7b0bf3dc33 -using Turing +using ModelingToolkit: t_nounits as t, D_nounits as D # ╔═╡ f6e77c8d-de11-4b9d-93c6-45bdcfbbbf9b -using StatsPlots, StatsBase +using StatsPlots, StatsBase, Turing # ╔═╡ 9345dd8f-0a60-4aaf-a27f-ef8bf860f495 -using LinearAlgebra - -# ╔═╡ ea02aff2-7fb8-4b8f-8d0f-0bb3c6150708 -using Optim +using LinearAlgebra, Optim # ╔═╡ 55d5400d-1777-4918-a030-b94cb9a59f63 md" @@ -53,18 +43,20 @@ $$\begin{align} where $v = 10^{-3}\;h^{-1}\,mm^{-1}$ and $S_{1,res}=10 \;mm$. Previously, we also assumed $k = 3\;mm\,h^{-1}$ and $S_{max} = 150\;mm$. 
""" -# ╔═╡ ad42d3a7-6d83-4362-aa3f-31628a1db9b2 -md""" -The *reaction network object* for this model could be set up as: -""" +# ╔═╡ 9f6ad49c-cfbe-46e2-a2dd-c01bb67eeb61 +# @variables missing -# ╔═╡ dc26abff-f8ab-4881-9acf-7b325b386a16 -irrigation_mod = @reaction_network begin - k/Smax, S1 --> S2 - v, 2S2 --> 0 - r * (1 - S1res / Smax), 0 --> S1 - r/Smax, S1 --> 0 -end +# ╔═╡ fb2c7db4-e52f-45df-afa0-aeb1db78c849 +# @parameters missing + +# ╔═╡ c1773f17-825f-414f-9761-5744b34f1b71 +# change_S1 = missing + +# ╔═╡ 6e2165d4-ef72-4c37-bac0-2dfac417a192 +# change_S2 = missing + +# ╔═╡ 59654ad7-f42c-443f-bb34-6674db72606f +# @mtkbuild sys_irrigation = missing # ╔═╡ e5d7520d-fd8c-48c0-bd36-826766212217 md""" @@ -128,31 +120,16 @@ md""" Calibrate the parameter values for $k$ and $S_{max}$ using the aforementioned measurement data for $S_1$ and $S_2$ in a timespan of $[0, 150]\,h$. Take the values from above as initial values for $k$ and $S_{max}$. """ -# ╔═╡ d35bbe54-5ebc-4ea5-a6c8-a6419476ec4c -md""" -Create an `ODEProblem`. Use the aforementioned values as initial values for the problem. -""" - -# ╔═╡ 10057510-e4b6-4a3e-9d3f-f05effc88a58 -# u0 = missing # Uncomment and complete the instruction - -# ╔═╡ f4d49b1d-9105-4050-a9d4-196fa00a0591 -# tspan = missing # Uncomment and complete the instruction - -# ╔═╡ 777ce59f-c849-4a2e-a6dc-ae309d2a2e7c -# params = missing # Uncomment and complete the instruction - -# ╔═╡ 43b83336-aea6-4914-bc26-b2e84994ce57 -# oprob = missing +# ╔═╡ a65a0997-9945-436e-925b-8fc02055f61a +# missing # ╔═╡ 923d04ce-b4d2-44b0-afff-7062c4628ad0 md""" -Declare the Turing model. Make sure you take both experiments into account for optimizing $k$ and $S_{max}$. Use `InverseGamma` for the standard deviations of the measurements, `LogNormal` for $k$ and `Uniform` (between 100 and 200) for $Smax$. +Declare the Turing model. Make sure you take both experiments into account for optimizing $k$ and $S_{max}$. 
Based on literature, you can assume that the value of $S_{max}$ lies somewhere between 100 and 200 mm. """ # ╔═╡ 481eb8b9-5de2-4f68-b06a-ec18e054c9f5 -# Uncomment and complete the instruction -# @model function irrigation_fun(t_meas) +# @model function irrigation_inference(t_meas) # σ_S1 ~ missing # σ_S2 ~ missing # k ~ missing @@ -176,7 +153,7 @@ Provide the time measurements to the defined function and instantly condition th """ # ╔═╡ 0e2aa675-9e09-4e06-b5f8-118707ee652a -# irrigation_cond_mod = missing # Uncomment and complete the instruction +# irrigation_inf = missing # ╔═╡ f7f47956-7c3b-44cc-bff7-fb7d32af874a md""" @@ -184,7 +161,7 @@ Optimize the priors ($\sigma_{S1}$, $\sigma_{S2}$, $k$ and $S_{max}$). Do this w """ # ╔═╡ 8c254d5a-225b-4772-9fdd-e9f700495fbd -# results_mle = missing # Uncomment and complete the instruction +# results_mle = missing # ╔═╡ f15a1df5-047a-4f46-9419-8492ac1248e0 md""" @@ -192,7 +169,7 @@ Visualize a summary of the optimized parameters. """ # ╔═╡ 00d944e4-2c88-4a5d-b809-69f435df4684 -# missing # Uncomment and complete the instruction +# missing # ╔═╡ 89eb31ef-b24f-44c8-bbe5-19101d859937 md""" @@ -200,10 +177,10 @@ Get the optimized values and assign them to `k_opt` and `Smax_opt`. 
""" # ╔═╡ 92daa779-3373-40c0-b308-23e75e6674b6 -# k_opt = missing # Uncomment and complete the instruction +# k_opt = missing # ╔═╡ 35ab6ee5-fcd7-4dcc-9909-cc918fb1fe80 -# Smax_opt = missing # Uncomment and complete the instruction +# Smax_opt = missing # ╔═╡ 4026773f-ac5b-433e-bd9d-2122242861fd md""" @@ -216,7 +193,7 @@ Set up parameter values with optimized parameter values: """ # ╔═╡ 97d53e48-590a-485b-bcf3-edc6a6124faf -# params_opt = missing # Uncomment and complete the instruction +# params_opt = missing # ╔═╡ dfd2ac98-5cdc-4627-b6cf-71b33c0ff0d4 md""" @@ -224,16 +201,15 @@ Plot the simulation results $S_1$ and $S_2$ for the 1st experiment together with """ # ╔═╡ 95ace332-52c0-46c3-ae28-d038320ed2c8 -# u01 = missing # Uncomment and complete the instruction +# u01 = missing # ╔═╡ 6ae63a13-d5ae-4dfb-b88d-be295b11a472 -# oprob1_opt = missing # Uncomment and complete the instruction +# oprob1_opt = missing # ╔═╡ bc6505ca-a61d-467f-afe6-47792a510ad5 -# osol1_opt = missing # Uncomment and complete the instruction +# osol1_opt = missing # ╔═╡ 67e423ea-e941-45bf-af4f-3fdecb648fbc -# Uncomment and complete the instruction # begin # missing # missing @@ -246,16 +222,15 @@ Plot the simulation results $S_1$ and $S_2$ for the 1st experiment together with """ # ╔═╡ a7040b8e-c240-415b-8a9a-4a1a137398d4 -# u02 = missing # Uncomment and complete the instruction +# u02 = missing # ╔═╡ fe8f4961-68bd-42dc-a3f5-6692e918e241 -# oprob2_opt = missing # Uncomment and complete the instruction +# oprob2_opt = missing # ╔═╡ 7f280230-7846-4529-a2ff-a81a2b9480bf -# osol2_opt = missing # Uncomment and complete the instruction +# osol2_opt = missing # ╔═╡ ad9818a9-ccbe-4645-8b91-0c3fa773632a -# Uncomment and complete the instruction # begin # missing # missing @@ -273,17 +248,18 @@ md"- Answer: missing" # ╔═╡ Cell order: # ╠═2b010e5c-1121-11ef-16fe-a5e3317122e4 -# ╠═6750d246-8e7a-4cfb-810e-d1100aa4fdef # ╠═55675f3d-2fae-4a97-a0a0-ead29a6352e6 # ╠═4947b0fd-13be-4f6a-b605-ed35b509d7ff # 
╠═61d14819-ba44-40fe-95a9-9d7b0bf3dc33 # ╠═f6e77c8d-de11-4b9d-93c6-45bdcfbbbf9b # ╠═9345dd8f-0a60-4aaf-a27f-ef8bf860f495 -# ╠═ea02aff2-7fb8-4b8f-8d0f-0bb3c6150708 # ╟─55d5400d-1777-4918-a030-b94cb9a59f63 # ╟─8f1afdec-b78d-4aba-a74f-cd3e4b35fab1 -# ╟─ad42d3a7-6d83-4362-aa3f-31628a1db9b2 -# ╠═dc26abff-f8ab-4881-9acf-7b325b386a16 +# ╠═9f6ad49c-cfbe-46e2-a2dd-c01bb67eeb61 +# ╠═fb2c7db4-e52f-45df-afa0-aeb1db78c849 +# ╠═c1773f17-825f-414f-9761-5744b34f1b71 +# ╠═6e2165d4-ef72-4c37-bac0-2dfac417a192 +# ╠═59654ad7-f42c-443f-bb34-6674db72606f # ╟─e5d7520d-fd8c-48c0-bd36-826766212217 # ╟─73c9b5fb-4f56-4bde-beb4-387651409c1b # ╠═9f94c63e-628f-4ff3-ad29-0f90d32dfcb1 @@ -297,11 +273,7 @@ md"- Answer: missing" # ╠═cef4b9a8-b5bf-4a2d-8a8f-5d8f85534859 # ╠═fc2cabd7-e778-4211-bf87-b5c11ca054c9 # ╟─c0b2db7b-0632-4008-9cff-d5fbf3e59807 -# ╟─d35bbe54-5ebc-4ea5-a6c8-a6419476ec4c -# ╠═10057510-e4b6-4a3e-9d3f-f05effc88a58 -# ╠═f4d49b1d-9105-4050-a9d4-196fa00a0591 -# ╠═777ce59f-c849-4a2e-a6dc-ae309d2a2e7c -# ╠═43b83336-aea6-4914-bc26-b2e84994ce57 +# ╠═a65a0997-9945-436e-925b-8fc02055f61a # ╟─923d04ce-b4d2-44b0-afff-7062c4628ad0 # ╠═481eb8b9-5de2-4f68-b06a-ec18e054c9f5 # ╟─df933ae8-1f51-4467-93a7-33f153e5e4f8
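The notebooks above switch from passing measurements into the Turing model as a function argument to attaching them afterwards with the conditioning operator `|` (e.g. `growth_exp_inference(t_meas) | (W_s = W_meas,)`). A minimal, self-contained sketch of that pattern, independent of the ODE models; the names `toy_inference`, `y_s` and `y_meas` are illustrative, not from the notebooks:

```julia
using Turing

# Toy model: y_s is declared with ~ but never passed in as data,
# so it stays a free random variable that can be conditioned on later.
@model function toy_inference(n)
    σ ~ Exponential()              # noise scale
    m ~ Normal(0, 10)              # unknown mean
    y_s ~ MvNormal(fill(m, n), σ)  # vector of observations
end

y_meas = [1.9, 2.1, 2.0]

# Conditioning with | fixes y_s to the measurements without redefining the model:
toy_inf = toy_inference(length(y_meas)) | (y_s = y_meas,)

# The conditioned model is then optimized or sampled exactly as in the notebooks:
# results_mle = optimize(toy_inf, MLE(), NelderMead())
# chain = sample(toy_inf, NUTS(), 500)
```

One advantage of this style, which the diff exploits, is that the same model function can be conditioned on different datasets (here, the two irrigation experiments) without duplicating the model definition.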