
[Merged by Bors] - Don't "compile" inputs of macros #222


Conversation

@torfjelde (Member)

At the moment we will actually call generate_mainbody! on inputs to macros inside the model, e.g. in a model @mymacro x ~ Normal() will actually result in code @mymacro $(generate_mainbody!(:(x ~ Normal()))) (or something, you get the idea).

IMO, this shouldn't be done for the following reasons:

  1. Breaks with what you'd expect in Julia, IMO, which is that a macro eats the "raw" code.
  2. Means that if we want to do stuff like @reparam from Reparameterizations #220 (and a bunch of others, see [Merged by Bors] - Small simplification of compiler #221 for a small list of possibilities), we need to touch the compiler rather than just make a small macro that performs transformations after the compiler has done its job (referring to the DynamicPPL compiler here).
  3. If the user wants to use a macro on some variables, but wants the actual variable rather than messing around with the sample statement, they can just separate it into two lines, e.g. x ~ Normal(); @mymacro ....

Also, to be completely honest, for the longest time I've just assumed that I'm not even allowed to do @mymacro x ~ Normal() and have things work 😅 I bet a lot of people have the same impression by default (though this might of course just not be true:) )
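To illustrate point 1 above, here's a minimal sketch in plain Julia (nothing DynamicPPL-specific; @inspect is a hypothetical macro used only for illustration):

```julia
# Hypothetical macro: it prints the expression it receives and then
# evaluates it unchanged in the caller's scope.
macro inspect(ex)
    @show ex          # `ex` is the raw, unexpanded code exactly as the user wrote it
    return esc(ex)
end

@inspect sqrt(2 + 2)  # prints `ex = :(sqrt(2 + 2))` at expansion time, then returns 2.0
```

The concern is that inside @model a nested macro currently receives the already-rewritten code rather than the raw x ~ Normal() expression.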

@devmotion (Member) commented Apr 1, 2021

I'm not sure if there's really a common/standard convention regarding the execution of nested macros in Julia. There are so many different ways to interpolate things, protect them from expansion, etc. and so many different use cases...

Maybe instead of just protecting macro calls (apart from __dot__) one could call macroexpand(__module__, ex; recursive=false) in such a case and expand it before the DynamicPPL compiler creates the model?
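For reference, a minimal sketch (plain Julia, not DynamicPPL code; @addone is a hypothetical macro) of what a single macroexpand call does:

```julia
# Hypothetical macro used only for illustration.
macro addone(ex)
    return esc(:($ex + 1))
end

ex = :(@addone(x * y))
expanded = macroexpand(@__MODULE__, ex; recursive=false)
# `expanded` should now be the plain expression `:(x * y + 1)`,
# i.e. the macro call has been replaced by its expansion.
```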

@devmotion (Member)

It seems in this case one does not even have to distinguish between __dot__ and other macros anymore, one would just always expand one layer of the macro?

@torfjelde (Member, Author)

Maybe instead of just protecting macro calls (apart from __dot__) one could call macroexpand(__module__, ex; recursive=false) in such a case and expand it before the DynamicPPL compiler creates the model?

But isn't the "intuitive" thing that the model just doesn't touch it? I guess, why would we call macroexpand rather than just waiting with macro expansion until after the model is done (and have the model not touch the internal code)?

I'm just wary of "manually" touching the expansion of the macros, I suppose. Feels like that is prone to error or misunderstanding for the user.

@devmotion (Member)

But isn't the "intuitive" thing that the model just doesn't touch it? I guess, why would we call macroexpand rather than just waiting with macro expansion until after the model is done (and have the model not touch the internal code)?

OK, maybe I misunderstood your initial post then 😄 I think expanding @mymacro before performing the DynamicPPL rewriting would actually be the more intuitive thing - that's e.g. also how @. works:

julia> macro mymacro(ex)
           return 42
       end

julia> @macroexpand @. @mymacro(x + y)
42

@devmotion (Member)

Ah, the example was bad, it is actually the other way around 😄

julia> macro mymacro(ex)
           @show ex
           return esc(:(x - y))
       end

julia> @macroexpand @. @mymacro(x + y)
ex = :((+).(x, y))
:(x - y)

@devmotion (Member)

But... this example shows that the current behaviour (which applies generate_mainbody! to the macro arguments) is actually in line with how @. behaves.

@torfjelde (Member, Author)

Oh well that's my intuition out the window!

BUT @. doesn't actually "manually" expand the code though, like we would do. It just goes "Oh, I don't recognize this as something I can dot, so I'm just going to dot its args instead" (https://github.com/JuliaLang/julia/blob/13caa8abf2320ea7e6f7255adf3477cbd6096e87/base/broadcast.jl#L1259-L1271)

@torfjelde (Member, Author) commented Apr 1, 2021

Just to clarify: I'm not against expanding the macro before the model does its thing. It would actually be pretty dope, since you could then generate expressions with ~ in them! 🎉 Making the @reparam macro, for example, would then be completely trivial:)

I just don't have a lot of experience with using macroexpand in this way so I'm in no position to evaluate whether it can cause issues 😅

Also: why do recursive=false? Curious.

@torfjelde (Member, Author)

I can't think of why it should cause issues though. The more I think about it, the more I like your suggestion of expanding first.

@devmotion (Member)

As I said above, IMO it really depends on the macro you use whether it is more intuitive to expand inner layers first or not. Since the @model macro rewrites such large blocks of the code, I think it could be reasonable to expand inner stuff first and let users generate x ~ dist blocks and other parts of the model instead of keeping it until the end (since then you would have to manually generate the observe/assume calls). And it seems one could still avoid expansion by protecting it with $?

recursive=false is maybe not useful, initially I was just worried that expansion might be too invasive.
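For what it's worth, a small sketch of what the recursive keyword controls, using hypothetical @inner/@outer macros (again not DynamicPPL code):

```julia
# Hypothetical macros used only to illustrate the `recursive` keyword.
macro inner(ex)
    return esc(:($ex + 1))
end

macro outer(ex)
    return esc(:(@inner($ex)))
end

ex = :(@outer(x))

# recursive=true (the default) also expands macro calls produced by the expansion,
# so the generated @inner call is expanded as well, roughly giving `:(x + 1)`.
macroexpand(@__MODULE__, ex)

# recursive=false expands only the outermost layer, leaving the @inner call in place.
macroexpand(@__MODULE__, ex; recursive=false)
```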

@torfjelde (Member, Author)

And it seems one could still avoid expansion by protecting it with $?

That's a good point. I like this. Currently trying out macroexpand 👍

@torfjelde (Member, Author)

Made the change. Now e.g. the following works:

julia> using DynamicPPL, Distributions

julia> macro reparam(f, expr)
           L, R = DynamicPPL.getargs_tilde(expr)

           res = quote
               tmp ~ $R
               $L = $f(tmp)
           end
           return esc(res)
       end
@reparam (macro with 1 method)

julia> @model function demo()
           @reparam x -> 0 x ~ Normal()

           return x
       end
demo (generic function with 1 method)

julia> demo()()
0

julia> ex = @macroexpand @model function demo()
           @reparam x -> 0 x ~ Normal()

           return x
       end;

julia> ex |> Base.remove_linenums!
quote
    $(Expr(:meta, :doc))
    function demo(; )
        var"##evaluator#286" = ((_rng::Random.AbstractRNG, _model::Model, _varinfo::AbstractVarInfo, _sampler::AbstractMCMC.AbstractSampler, _context::DynamicPPL.AbstractContext)->begin
                    begin
                        begin
                            begin
                                var"##tmpright#282" = Normal()
                                var"##tmpright#282" isa Union{Distribution, AbstractVector{<:Distribution}} || throw(ArgumentError("Right-hand side of a ~ must be subtype of Distribution or a vector of Distributions."))
                                var"##vn#284" = tmp
                                var"##inds#285" = ()
                                tmp = (DynamicPPL.tilde_assume)(_rng, _context, _sampler, var"##tmpright#282", var"##vn#284", var"##inds#285", _varinfo)
                            end
                            x = ((x->begin
                                        0
                                    end))(tmp)
                        end
                        return x
                    end
                end)
        return (Model)(:demo, var"##evaluator#286", NamedTuple(), NamedTuple())
    end
end

(Of course it's a stupid example, but you get the idea.)

@torfjelde (Member, Author)

Tests also pass locally 👍

@torfjelde (Member, Author)

What do you think about the updated version, @devmotion?

@devmotion (Member) left a comment

Nice, I like this change. Can you add some tests?

src/compiler.jl Outdated
@@ -64,15 +64,15 @@ To generate a `Model`, call `model(xvalue)` or `model(xvalue, yvalue)`.
macro model(expr, warn=true)
# include `LineNumberNode` with information about the call site in the
# generated function for easier debugging and interpretation of error messages
esc(model(expr, __source__, warn))
esc(model(expr, __source__, warn, __module__))
@devmotion (Member)

Maybe it would make sense to be a bit more consistent and group the internal variables together, i.e., make them e.g. the first two arguments? E.g.

Suggested change
esc(model(expr, __source__, warn, __module__))
esc(model(__module__, __source__, expr, warn))

(as __module__ is also the first argument of macroexpand)

@torfjelde (Member, Author)

Haha, I literally made the change of grouping them together (though I didn't move them to the first argument; I agree with this too).

But any opinions on what to do with generate_mainbody!, etc.? Personally I don't like switching the ordering of arguments, e.g. __module__ being before warn in model but warn coming before __module__ in generate_mainbody!. Should I maybe make mod the first argument to generate_mainbody! too?

@devmotion (Member)

Ha, I thought about this as well. I didn't suggest it because one would have to change quite a few lines. But I agree that it would be more consistent if mod were the first argument there as well 👍

@torfjelde (Member, Author)

Haha, but sweet I'll do that then 👍

src/compiler.jl Outdated
end

function model(expr, linenumbernode, warn)
function model(expr, linenumbernode, warn, mod)
@devmotion (Member)

For the suggestion above this would be changed to

Suggested change
function model(expr, linenumbernode, warn, mod)
function model(mod, linenumbernode, expr, warn)

@torfjelde (Member, Author)

Going out for a bit, but I'll get back to this later (and the other PRs) 👍

@devmotion (Member)

I guess we should also add some tests for the macro expansion and for interpolated macros (just to be sure that it works). E.g. something like

macro mymodel()
    return esc(:(x ~ Normal()))
end

@model function demo()
    @mymodel()
end

@test haskey(VarInfo(demo()), @varname(x))

and

macro mymodel()
    return esc(:(return 42))
end

@model function demo()
    x ~ Normal()
    $(@mymodel())
end

@test demo()() == 42

@torfjelde (Member, Author)

Sorry, hadn't gotten around to adding tests yet; but added the ones you suggested just now:)

@torfjelde (Member, Author) commented Apr 3, 2021

Also, I must have been particularly sleepy when I made this PR: I just noticed that I named the branch tor/DONT-expand-macro-inputs, which is exactly the opposite of what I was trying to achieve 😅

@torfjelde (Member, Author)

Ah crap, I see I forgot to move the mod argument as we discussed. I think I actually did, but realized I was on the wrong branch and stashed the change so that I could apply it to this one later.
Omw to bed so I'll get that done tomorrow 👍

test/compiler.jl Outdated
Comment on lines 286 to 293
macro mymodel()
    return esc(:(return 42))
end

@model function demo()
    x ~ Normal()
    $(@mymodel())
end
@devmotion (Member)

Ah, this was a bad idea since it should give the same result regardless of whether the macro is expanded or not. Maybe the following captures better what we want to test:

Suggested change
macro mymodel()
    return esc(:(return 42))
end
@model function demo()
    x ~ Normal()
    $(@mymodel())
end
struct MyModelStruct{T}
    x::T
end
Base.:~(x, y::MyModelStruct) = y.x
macro mymodel(ex)
    # check if expression was modified by the DynamicPPL "compiler"
    if ex == :(y ~ Uniform())
        return :(4 ~ MyModelStruct(42))
    else
        return :(return -1)
    end
end
@model function demo()
    $(@mymodel(y ~ Uniform()))
end

Then the test will fail if

  • the DynamicPPL compiler expands y ~ Uniform() before expanding the macro (returns -1)
  • @mymodel is expanded before the DynamicPPL compiler has transformed the whole @model expression (error since MyModelStruct is not a distribution, and hence the tilde_observe errors)

@devmotion (Member)

Maybe the struct (and/or the macros) have to be defined at the top level, I am not sure...

@devmotion (Member) left a comment

Looks good, just a minor thing (whitespace instead of tabs). It's a breaking change since the model syntax changes but I think the current behaviour is a bug and therefore the non-breaking version bump is fine 👍

test/compiler.jl Outdated
Comment on lines 278 to 281
return esc(:(x ~ Normal()))
else
return esc(:(z ~ Exponential()))
end
@devmotion (Member)

White space instead of tabs:

Suggested change
return esc(:(x ~ Normal()))
else
return esc(:(z ~ Exponential()))
end
return esc(:(x ~ Normal()))
else
return esc(:(z ~ Exponential()))
end

@devmotion (Member)

Ah seems I approved too quickly - it would be good to improve the test of protected macros as well (maybe along the lines of https://github.com/TuringLang/DynamicPPL.jl/pull/222/files#r606757078).

@torfjelde (Member, Author)

Ehh not certain what you're linking there 😕 It shows me src/compiler.jl

test/compiler.jl Outdated
Comment on lines 278 to 281
return esc(:(x ~ Normal()))
else
return esc(:(z ~ Exponential()))
end
@devmotion (Member)

Somehow it's still not correct?

Suggested change
return esc(:(x ~ Normal()))
else
return esc(:(z ~ Exponential()))
end
return esc(:(x ~ Normal()))
else
return esc(:(z ~ Exponential()))
end

@torfjelde (Member, Author)

Should be good now 👍

@devmotion (Member)

Ehh not certain what you're linking there

Ah, sorry, probably I copied the wrong link. It seems you updated the second test as well, so all good 👍

@devmotion (Member) left a comment

LGTM

@torfjelde (Member, Author)

bors r+

bors bot pushed a commit that referenced this pull request Apr 4, 2021
@bors bot (Contributor) commented Apr 4, 2021

Build failed:

@torfjelde (Member, Author)

bors r+

bors bot pushed a commit that referenced this pull request Apr 4, 2021
bors bot changed the title from "Don't "compile" inputs of macros" to "[Merged by Bors] - Don't "compile" inputs of macros" on Apr 4, 2021
bors bot closed this on Apr 4, 2021
bors bot deleted the tor/dont-expand-macro-inputs branch on April 4, 2021 at 10:45
bors bot pushed a commit that referenced this pull request May 18, 2021
Part of the motivation for #221 and #222 was so we could add submodels/model-nesting.

Well, now we can. 

Special thanks to @devmotion, who reviewed those PRs (several times), improved them significantly, made additional PRs, and suggested the current impl of the `@submodel` ❤️

EDIT: We fixed the performance:) This now has zero runtime overhead! See the comment section.
EDIT 2: Thanks to @devmotion, we can now also deal with dynamically specified prefixes!

- [Motivating example: AR1-prior](#org46a90a5)
- [Demos](#org7e05701)
- [Can it ever fail?](#org75acb71)
- [Benchmarks](#orga99bcf4)


<a id="org46a90a5"></a>

# Motivating example: AR1-prior

```julia
using Turing
using DynamicPPL
```

```julia
# Could have made a model which samples `num_obs` AR1 series simultaneously,
# but for the sake of showing off dynamic prefixes, we'll only use a vector implementation.
# The matrix implementation would be quite a bit faster too, but oh well.
@model function AR1(num_steps, α, μ, σ, ::Type{TV} = Vector{Float64}) where {TV}
    η ~ MvNormal(num_steps, 1.0)
    δ = sqrt(1 - α^2)

    x = TV(undef, num_steps)
    x[1] = η[1]
    @inbounds for t = 2:num_steps
        x[t] = @. α * x[t - 1] + δ * η[t]
    end

    return @. μ + σ * x
end

# Generate an observation
σ_obs = 0.1
num_obs = 5
num_steps = 10

ar1 = AR1(num_steps, 0.5, 1.0, 1.0)
ys = mapreduce(hcat, 1:num_obs) do i
    ar1() + σ_obs * randn(num_steps)
end
```

    10×5 Matrix{Float64}:
      2.30189    0.301618  1.73268   -0.65096    1.46835
      2.11187   -1.34878   2.3728     1.02125    3.28422
     -0.249064   0.769488  1.34044    3.22175    2.52196
     -0.25863   -0.216914  0.528954   3.04756    3.8234
      0.372122   0.473511  0.708068   0.76197    0.202003
      0.41487    0.759435  1.80162    0.790204   0.12331
      1.32585    0.567929  2.74316    1.0874     2.82701
      1.84307    1.16138   1.36382    0.735388   1.07423
      3.20139    0.75177   1.57236    0.865401  -0.315341
      1.22479    1.35688   2.8239     0.597959   0.587955

```julia
@model function demo(y)
    α ~ Uniform()
    μ ~ Normal()
    σ ~ truncated(Normal(), 0, Inf)

    num_steps = size(y, 1)
    num_obs = size(y, 2)
    @inbounds for i = 1:num_obs
        x = @SubModel $(Symbol("ar1_$i")) AR1(num_steps, α, μ, σ)
        y[:, i] ~ MvNormal(x, 0.1)
    end
end;

m = demo(ys);
vi = VarInfo(m);
```

```julia
keys(vi)
```

    8-element Vector{VarName{sym, Tuple{}} where sym}:
     α
     μ
     σ
     ar1_1.η
     ar1_2.η
     ar1_3.η
     ar1_4.η
     ar1_5.η

```julia
vi[@varname α]
```

    0.9383208224122919

```julia
chain = sample(m, NUTS(1_000, 0.8), 3_000);
```

    ┌ Info: Found initial step size
    │   ϵ = 0.025
    └ @ Turing.Inference /home/tor/.julia/packages/Turing/rHLGJ/src/inference/hmc.jl:188
    Sampling: 100%|█████████████████████████████████████████| Time: 0:04:00

```julia
chain[1001:end, [:α, :μ, :σ], :]
```

    Chains MCMC chain (2000×3×1 Array{Float64, 3}):
    
    Iterations        = 1001:3000
    Thinning interval = 1
    Chains            = 1
    Samples per chain = 2000
    parameters        = α, μ, σ
    internals         = 
    
    Summary Statistics
      parameters      mean       std   naive_se      mcse        ess      rhat 
          Symbol   Float64   Float64    Float64   Float64    Float64   Float64 
    
               α    0.5474    0.1334     0.0030    0.0073   159.6969    0.9995
               μ    1.0039    0.2733     0.0061    0.0168   169.9106    1.0134
               σ    1.1294    0.1807     0.0040    0.0106   166.8670    0.9998
    
    Quantiles
      parameters      2.5%     25.0%     50.0%     75.0%     97.5% 
          Symbol   Float64   Float64   Float64   Float64   Float64 
    
               α    0.2684    0.4625    0.5534    0.6445    0.7861
               μ    0.4248    0.8227    1.0241    1.2011    1.4801
               σ    0.8781    1.0018    1.0989    1.2239    1.5472

Yay! We recovered the true parameters :tada:

```julia
@benchmark $m($vi)
```

    BenchmarkTools.Trial: 
      memory estimate:  12.05 KiB
      allocs estimate:  123
      --------------
      minimum time:     15.091 μs (0.00% GC)
      median time:      17.861 μs (0.00% GC)
      mean time:        19.582 μs (5.23% GC)
      maximum time:     10.293 ms (99.46% GC)
      --------------
      samples:          10000
      evals/sample:     1


<a id="org7e05701"></a>

# Demos

```julia
using DynamicPPL, Distributions
```

    ┌ Info: Precompiling DynamicPPL [366bfd00-2699-11ea-058f-f148b4cae6d8]
    └ @ Base loading.jl:1317

```julia
@model function demo1(x)
    x ~ Normal()
end;
@model function demo2(x, y)
    @SubModel demo1(x)
    y ~ Uniform()
end false;
m2 = demo2(missing, missing);
vi2 = VarInfo(m2);
keys(vi2)
```

    2-element Vector{VarName{sym, Tuple{}} where sym}:
     x
     y

```julia
println(vi2[VarName(Symbol("x"))])
println(vi2[VarName(Symbol("y"))])
```

    0.3069117531180063
    0.7325324947386318

We can also `observe` without issues:

```julia
@model function demo2(x, y)
    @SubModel demo1(x)
    y ~ Normal(x)
end false;
m2 = demo2(1000.0, missing);
vi2 = VarInfo(m2);
keys(vi2)
```

    1-element Vector{VarName{:y, Tuple{}}}:
     y

```julia
vi2[@varname y]
```

    1000.3905079427211

```julia
DynamicPPL.getlogp(vi2)
```

    -500001.9141252931

But what if the models have the same variable-names?!

"Sure, this is cool and all, but can we even use the values from the nested values in the parent model?"

```julia
@model function demo_return(x)
    x ~ Normal()
    return x
end;

@model function demo_useval(x, y)
    x1 = @SubModel sub1 demo_return(x)
    x2 = @SubModel sub2 demo_return(y)

    z ~ Normal(x1 + x2 + 100, 1.0)
end false;
vi = VarInfo(demo_useval(missing, missing));
keys(vi)
```

    3-element Vector{VarName{sym, Tuple{}} where sym}:
     sub1.x
     sub2.x
     z

```julia
vi[@varname z]
```

    101.09066854862154

And just to prove a point:

```julia
@model function nested(x, y)
    @SubModel 1 nested1(x, y)
    y ~ Uniform()
end false;
@model function nested1(x, y)
    @SubModel 2 nested2(x, y)
    y ~ Uniform()
end false;
@model function nested2(x, y)
    z = @SubModel 3 nested3(x, y)
    y ~ Normal(z, 1.0)
end false;
@model function nested3(x, y)
    x ~ Uniform()
    y ~ Normal(-100.0, 1.0)

    return x + 1000
end false;

m = nested(missing, missing);
vi = VarInfo(m);
keys(vi)
```

    5-element Vector{VarName{sym, Tuple{}} where sym}:
     1.2.3.x
     1.2.3.y
     1.2.y
     1.y
     y

```julia
vi[VarName(Symbol("1.2.y"))]
```

    1000.5609156083766

```julia
DynamicPPL.getlogp(vi)
```

    -4.620040828101227


<a id="org75acb71"></a>

# Can it ever fail?

Yeah, if the user doesn't provide the prefix, it can:

```julia
@model function nested(x, y)
    @SubModel nested1(x, y)
    y ~ Uniform()
end false;
@model function nested1(x, y)
    @SubModel nested2(x, y)
    y ~ Uniform()
end false;
@model function nested2(x, y)
    z = @SubModel nested3(x, y)
    y ~ Normal(z, 1.0)
end false;
@model function nested3(x, y)
    x ~ Uniform()
    y ~ Normal(-100.0, 1.0)

    return x + 1000
end false;

m = nested(missing, missing);
vi = VarInfo(m);
keys(vi)
```

    2-element Vector{VarName{sym, Tuple{}} where sym}:
     x
     y

```julia
# Inner-most value is recorded (i.e. the first one reached)
vi[@varname y]
```

    -100.16836599596732

And it messes up the logp computation:

```julia
DynamicPPL.getlogp(vi)
```

    -Inf

But I could imagine there's a way for us to fix this, or at least warn the user when this happens.


<a id="orga99bcf4"></a>

# Benchmarks

At this point you're probably wondering, "but does it have any overhead (at runtime)?". For "shallow" nestings, nah, but if you go deep enough there seems to be a tiny bit (likely because we're calling the "constructor" for the model):

```julia
using BenchmarkTools

@model function base(x, y)
    x ~ Uniform()
    y ~ Uniform()
    y1 ~ Uniform()
    z = x + 1000
    y12 ~ Normal()
    y123 ~ Normal(-100.0, 1.0)
end

m1 = base(missing, missing);
vi1 = VarInfo(m1);
```

```julia
@model function nested_shallow(x, y)
    @SubModel 1 nested1_shallow(x, y)
    y ~ Uniform()
end false;
@model function nested1_shallow(x, y)
    x ~ Uniform()
    y ~ Uniform()
    z = x + 1000
    y12 ~ Normal()
    y123 ~ Normal(-100.0, 1.0)
end false;

m2 = nested_shallow(missing, missing);
vi2 = VarInfo(m2);
```

```julia
@model function nested(x, y)
    @SubModel 1 nested1(x, y)
    y ~ Uniform()
end false;
@model function nested1(x, y)
    @SubModel 2 nested2(x, y)
    y ~ Uniform()
end false;
@model function nested2(x, y)
    z = @SubModel 3 nested3(x, y)
    y ~ Normal(z, 1.0)
end false;
@model function nested3(x, y)
    x ~ Uniform()
    y ~ Normal(-100.0, 1.0)

    return x + 1000
end

m3 = nested(missing, missing);
vi3 = VarInfo(m3);
```

```julia
@model function nested_noprefix(x, y)
    @SubModel nested_noprefix1(x, y)
    y ~ Uniform()
end false;
@model function nested_noprefix1(x, y)
    @SubModel nested_noprefix2(x, y)
    y1 ~ Uniform()
end false;
@model function nested_noprefix2(x, y)
    z = @SubModel nested_noprefix3(x, y)
    y2 ~ Normal(z, 1.0)
end false;
@model function nested_noprefix3(x, y)
    x ~ Uniform()
    y3 ~ Normal(-100.0, 1.0)

    return x + 1000
end

m4 = nested_noprefix(missing, missing);
vi4 = VarInfo(m4);
```

```julia
keys(vi1)
```

    5-element Vector{VarName{sym, Tuple{}} where sym}:
     x
     y
     y1
     y12
     y123

```julia
keys(vi2)
```

    5-element Vector{VarName{sym, Tuple{}} where sym}:
     1.x
     1.y
     1.y12
     1.y123
     y

```julia
keys(vi3)
```

    5-element Vector{VarName{sym, Tuple{}} where sym}:
     1.2.3.x
     1.2.3.y
     1.2.y
     1.y
     y

```julia
keys(vi4)
```

    5-element Vector{VarName{sym, Tuple{}} where sym}:
     x
     y3
     y2
     y1
     y

```julia
@benchmark $m1($vi1)
```

    BenchmarkTools.Trial: 
      memory estimate:  160 bytes
      allocs estimate:  5
      --------------
      minimum time:     1.714 μs (0.00% GC)
      median time:      1.747 μs (0.00% GC)
      mean time:        1.835 μs (0.00% GC)
      maximum time:     6.894 μs (0.00% GC)
      --------------
      samples:          10000
      evals/sample:     10

```julia
@benchmark $m2($vi2)
```

    BenchmarkTools.Trial: 
      memory estimate:  160 bytes
      allocs estimate:  5
      --------------
      minimum time:     1.759 μs (0.00% GC)
      median time:      1.778 μs (0.00% GC)
      mean time:        1.819 μs (0.00% GC)
      maximum time:     5.563 μs (0.00% GC)
      --------------
      samples:          10000
      evals/sample:     10

```julia
@benchmark $m3($vi3)
```

    BenchmarkTools.Trial: 
      memory estimate:  160 bytes
      allocs estimate:  5
      --------------
      minimum time:     1.718 μs (0.00% GC)
      median time:      1.746 μs (0.00% GC)
      mean time:        1.787 μs (0.00% GC)
      maximum time:     5.758 μs (0.00% GC)
      --------------
      samples:          10000
      evals/sample:     10

```julia
@benchmark $m4($vi4)
```

    BenchmarkTools.Trial: 
      memory estimate:  160 bytes
      allocs estimate:  5
      --------------
      minimum time:     1.672 μs (0.00% GC)
      median time:      1.696 μs (0.00% GC)
      mean time:        1.756 μs (0.00% GC)
      maximum time:     4.882 μs (0.00% GC)
      --------------
      samples:          10000
      evals/sample:     10

Notice that the number of allocations has increased for the deeply nested model. It seems like the Julia compiler isn't too good at inferring the return types of Turing models? This also seems to be the case when looking at the lowered code. I haven't given this too much thought yet, btw; there is likely a way for us to help the compiler here.