Looks like this is an effect of JuliaDiff/ForwardDiff.jl#791:
```
Error During Test at /Users/runner/work/Turing.jl/Turing.jl/test/optimisation/Optimisation.jl:554
Got exception outside of a @test
DomainError with Dual{ForwardDiff.Tag{Turing.Optimisation.OptimLogDensity{DynamicPPL.LogDensityFunction{true, DynamicPPL.Model{typeof(DynamicPPL.TestUtils.demo_dot_assume_observe), (:x, Symbol("##arg#339")), (), (), Tuple{Vector{Float64}, DynamicPPL.TypeWrap{Vector{Float64}}}, Tuple{}, DynamicPPL.DefaultContext, false}, Nothing, typeof(DynamicPPL.getloglikelihood), @NamedTuple{m::DynamicPPL.RangeAndLinked}, Nothing, Vector{Float64}}}, Float64}}(0.0,NaN,NaN,NaN,NaN):
Normal: the condition σ >= zero(σ) is not satisfied.
Stacktrace:
  [1] #405
```
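That Dual has value 0.0 and all-NaN partials. As a side note on the mechanism, here's a minimal, made-up sketch (not the failing test model) of how ForwardDiff's default Dual arithmetic can produce NaN partials at a point where the true derivative is perfectly well defined:

```julia
using ForwardDiff

# Hypothetical example for illustration: the true derivative of
# x * sqrt(x) = x^(3/2) at x = 0 is 0, but the Dual product rule
# evaluates 0.0 * Inf (the value of x times the Inf partial of
# sqrt at 0), which yields a NaN partial in default mode:
ForwardDiff.derivative(x -> x * sqrt(x), 0.0)  # NaN
```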
That kind of NaN partial is what shows up as the NaN gradients here. This optimisation was never going to succeed anyway, though, so the issue is almost certainly something to do with Optimization rather than ForwardDiff. I haven't looked into this enough to know exactly where the issue lies: is it that it shouldn't be generating NaN gradients in the first place (which only happens with OptimizationNLopt.NLopt.LD_TNEWTON_PRECOND_RESTART(), not with LBFGS or Nelder–Mead), or is it something else, like maybe we should really be linking the model all the time?
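For reference, "linking" here means transforming constrained parameters (like a σ that must be non-negative) to unconstrained space before optimising, so the optimiser can never step onto an invalid σ and Distributions' `σ >= zero(σ)` check is never exercised. A rough sketch of the idea with a toy model I made up, assuming DynamicPPL's `link!!(varinfo, model)` method (worth double-checking against the DynamicPPL version in use):

```julia
using DynamicPPL, Distributions

# Toy model for illustration only, not the test model:
@model function demo(x)
    σ ~ truncated(Normal(); lower=0)  # constrained: σ ≥ 0
    for i in eachindex(x)
        x[i] ~ Normal(0, σ)
    end
end

model = demo(randn(10))
vi = DynamicPPL.VarInfo(model)

# link!! maps σ to unconstrained space (log σ for a lower-bounded
# parameter), so every real-valued point the optimiser evaluates
# corresponds to a valid σ > 0:
vi_linked = DynamicPPL.link!!(vi, model)
```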
(Edit: It's quite possibly caused by not using NaN-safe mode with ForwardDiff.)
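To expand on that edit: NaN-safe mode changes the Dual arithmetic so that zero partials survive multiplication by an Inf or NaN derivative. The snippet below is how I remember the ForwardDiff docs demonstrating it, and the `nansafe_mode` preference key is also from memory, so double-check the docs before relying on it:

```julia
using ForwardDiff

# Default mode: the chain rule multiplies log'(0) = Inf by the zero
# partial, so the partial comes out as NaN:
log(ForwardDiff.Dual{:x}(0.0, 0.0))  # Dual{:x}(-Inf, NaN)

# With NaN-safe mode enabled, the same call preserves the zero partial
# and returns Dual{:x}(-Inf, 0.0). Enabling it (preference key as I
# recall it; a Julia restart is needed so ForwardDiff recompiles):
#
#     using Preferences
#     Preferences.set_preferences!(ForwardDiff, "nansafe_mode" => true)
```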