[Merged by Bors] - Don't "compile" inputs of macros #222
Conversation
I'm not sure if there's really a common/standard convention regarding the execution of nested macros in Julia. There are so many different ways to interpolate things, protect them from expansion, etc. and so many different use cases... Maybe instead of just protecting macro calls (apart from …)
It seems in this case one does not even have to distinguish between …
But isn't the "intuitive" thing that the model just doesn't touch it? I guess, why would we call …? I'm just wary of "manually" touching the expansion of the macros, I suppose. Feels like that is prone to error or misunderstanding for the user.
OK, maybe I misunderstood your initial post then 😄 I think expanding …

```julia
julia> macro mymacro(ex)
           return 42
       end

julia> @macroexpand @. @mymacro(x + y)
42
```
Ah, the example was bad, it is actually the other way around 😄

```julia
julia> macro mymacro(ex)
           @show ex
           return esc(:(x - y))
       end

julia> @macroexpand @. @mymacro(x + y)
ex = :((+).(x, y))
:(x - y)
```
But.... This example shows that the current behaviour (that applies …)
Oh well, that's my intuition out the window! BUT …
Just to clarify: I'm not against expanding the macro before the model does its thing. It would actually be pretty dope, since you can then generate expressions with … I just don't have a lot of experience with using … Also: why do …
I can think of why it should cause issues though. The more I think about it, the more I like your suggestion of expanding first.
As I said above, IMO it really depends on the macro you use whether it is more intuitive to expand inner layers first or not. Since the …
That's a good point. I like this. Currently trying out …
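For reference, "expand inner macros first" can be sketched in a few lines. This is a hypothetical helper, not the actual DynamicPPL implementation; it only assumes Julia's built-in `macroexpand(::Module, x; recursive)`:

```julia
# Sketch: recursively expand any macro call with the caller's module before
# the model "compiler" walks the expression. `expand_macros` is a made-up name.
expand_macros(mod::Module, x) = x
function expand_macros(mod::Module, expr::Expr)
    if Meta.isexpr(expr, :macrocall)
        # the macro receives its raw input here; expand it (and any macros in
        # its output) before anything else touches the expression
        return macroexpand(mod, expr; recursive=true)
    end
    return Expr(expr.head, map(x -> expand_macros(mod, x), expr.args)...)
end
```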
Made the change. Now e.g. the following works:

```julia
julia> using DynamicPPL, Distributions

julia> macro reparam(f, expr)
           L, R = DynamicPPL.getargs_tilde(expr)
           res = quote
               tmp ~ $R
               $L = $f(tmp)
           end
           return esc(res)
       end
@reparam (macro with 1 method)

julia> @model function demo()
           @reparam x -> 0 x ~ Normal()
           return x
       end
demo (generic function with 1 method)

julia> demo()()
0

julia> ex = @macroexpand @model function demo()
           @reparam x -> 0 x ~ Normal()
           return x
       end;

julia> ex |> Base.remove_linenums!
quote
    $(Expr(:meta, :doc))
    function demo(; )
        var"##evaluator#286" = ((_rng::Random.AbstractRNG, _model::Model, _varinfo::AbstractVarInfo, _sampler::AbstractMCMC.AbstractSampler, _context::DynamicPPL.AbstractContext)->begin
            begin
                begin
                    begin
                        var"##tmpright#282" = Normal()
                        var"##tmpright#282" isa Union{Distribution, AbstractVector{<:Distribution}} || throw(ArgumentError("Right-hand side of a ~ must be subtype of Distribution or a vector of Distributions."))
                        var"##vn#284" = tmp
                        var"##inds#285" = ()
                        tmp = (DynamicPPL.tilde_assume)(_rng, _context, _sampler, var"##tmpright#282", var"##vn#284", var"##inds#285", _varinfo)
                    end
                    x = ((x->begin
                        0
                    end))(tmp)
                end
                return x
            end
        end)
        return (Model)(:demo, var"##evaluator#286", NamedTuple(), NamedTuple())
    end
end
```

(Of course it's a stupid example, but you get the idea.)
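For context, `DynamicPPL.getargs_tilde` splits a `~` expression into its left- and right-hand sides. A rough stand-in (not the actual implementation) behaves like this:

```julia
# Rough stand-in for DynamicPPL.getargs_tilde: return (L, R) for `L ~ R`,
# and `nothing` for anything else. `getargs_tilde_sketch` is a made-up name.
function getargs_tilde_sketch(expr)
    if Meta.isexpr(expr, :call, 3) && expr.args[1] === :~
        return expr.args[2], expr.args[3]
    end
    return nothing
end

getargs_tilde_sketch(:(x ~ Normal()))  # => (:x, :(Normal()))
```

This only works because the macro now receives the raw `x ~ Normal()` rather than the compiler's rewrite of it.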
Tests also pass locally 👍
What do you think about the updated version, @devmotion?
Nice, I like this change. Can you add some tests?
src/compiler.jl (outdated):

```diff
@@ -64,15 +64,15 @@ To generate a `Model`, call `model(xvalue)` or `model(xvalue, yvalue)`.
 macro model(expr, warn=true)
     # include `LineNumberNode` with information about the call site in the
     # generated function for easier debugging and interpretation of error messages
-    esc(model(expr, __source__, warn))
+    esc(model(expr, __source__, warn, __module__))
```
Maybe it would make sense to be a bit more consistent and group the internal variables together, i.e., make them e.g. the first two arguments? E.g.

```diff
-esc(model(expr, __source__, warn, __module__))
+esc(model(__module__, __source__, expr, warn))
```

(as `__module__` is also the first argument of `macroexpand`)
Haha, I literally made the change of grouping them together (but didn't move them to the first arguments; I agree with this too, though).

But opinions on what to do with `generate_mainbody!`, etc.? Personally I don't like switching the ordering of arguments, e.g. `__module__` being before `warn` in `model` but then `warn` coming before `__module__` in `generate_mainbody!`. Should I maybe make `mod` the first argument to `generate_mainbody!` too?
Ha, I thought about this as well. I didn't suggest it because one would have to change quite a few lines. But I agree that it would be more consistent if `mod` would be the first argument there as well 👍
Haha, but sweet, I'll do that then 👍
src/compiler.jl (outdated):

```diff
-function model(expr, linenumbernode, warn)
+function model(expr, linenumbernode, warn, mod)
```
For the suggestion above this would be changed to

```diff
-function model(expr, linenumbernode, warn, mod)
+function model(mod, linenumbernode, expr, warn)
```
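Combined with the earlier suggestion for the `@model` entry point, the call chain would look roughly like this. This is a sketch with a stub body (the real transformation lives in src/compiler.jl), shown only to illustrate the argument grouping:

```julia
# Stubbed sketch of the reordered entry point: module and source location
# grouped up front, user-facing arguments after.
function model(mod, linenumbernode, expr, warn)
    # ... the actual DynamicPPL model transformation goes here ...
    return expr
end

macro model(expr, warn=true)
    return esc(model(__module__, __source__, expr, warn))
end
```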
Going out for a bit, but I'll get back to this later (and the other PRs) 👍
I guess we should also add some tests for the macro expansion and for interpolated macros (just to be sure that it works). E.g. something like

```julia
macro mymodel()
    return esc(:(x ~ Normal()))
end

@model function demo()
    @mymodel()
end

@test haskey(VarInfo(demo()), @varname(x))
```

and

```julia
macro mymodel()
    return esc(:(return 42))
end

@model function demo()
    x ~ Normal()
    $(@mymodel())
end

@test demo()() == 42
```
Sorry, hadn't gotten around to adding tests yet; but added the ones you suggested just now :)
Also, I must have been particularly sleepy when I made this PR: I just noticed that I have named the branch …
Ah crap, I see I forgot to move the …
test/compiler.jl (outdated):

```julia
macro mymodel()
    return esc(:(return 42))
end

@model function demo()
    x ~ Normal()
    $(@mymodel())
end
```
Ah, this was a bad idea since it should give the same result regardless of whether the macro is expanded or not. Maybe the following captures better what we want to test:

```diff
-macro mymodel()
-    return esc(:(return 42))
-end
-
-@model function demo()
-    x ~ Normal()
-    $(@mymodel())
-end
+struct MyModelStruct{T}
+    x::T
+end
+Base.:~(x, y::MyModelStruct) = y.x
+macro mymodel(ex)
+    # check if expression was modified by the DynamicPPL "compiler"
+    if ex == :(y ~ Uniform())
+        return :(4 ~ MyModelStruct(42))
+    else
+        return :(return -1)
+    end
+end
+@model function demo()
+    $(@mymodel(y ~ Uniform()))
+end
```
Then the test will fail if
- the DynamicPPL compiler expands `y ~ Uniform()` before expanding the macro (returns -1), or
- `@mymodel` is expanded before the DynamicPPL compiler has transformed the whole `@model` expression (error, since `MyModelStruct` is not a distribution and hence `tilde_observe` errors).
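As an aside on why the replacement `4 ~ MyModelStruct(42)` works at all once it escapes DynamicPPL's tilde-handling: there, `~` is just an ordinary binary operator, and the suggestion overloads it (code copied from the suggestion above):

```julia
struct MyModelStruct{T}
    x::T
end

# `a ~ b` is sugar for the call `~(a, b)`, so this overload makes the
# expression evaluate to the wrapped value:
Base.:~(x, y::MyModelStruct) = y.x

4 ~ MyModelStruct(42)  # => 42, which becomes the model's return value
```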
Maybe the struct (and/or the macros) have to be defined at the top level, I am not sure...
Co-authored-by: David Widmann <[email protected]>
Looks good, just a minor thing (whitespace instead of tabs). It's a breaking change since the model syntax changes, but I think the current behaviour is a bug and therefore the non-breaking version bump is fine 👍
test/compiler.jl (outdated):

```julia
    return esc(:(x ~ Normal()))
else
    return esc(:(z ~ Exponential()))
end
```
White space instead of tabs (the suggested replacement is identical except that the indentation uses spaces rather than tabs):

```julia
    return esc(:(x ~ Normal()))
else
    return esc(:(z ~ Exponential()))
end
```
Ah, seems I approved too quickly - it would be good to improve the test of protected macros as well (maybe along the lines of https://github.com/TuringLang/DynamicPPL.jl/pull/222/files#r606757078).
Ehh, not certain what you're linking there 😕 It shows me …
test/compiler.jl (outdated):

```julia
    return esc(:(x ~ Normal()))
else
    return esc(:(z ~ Exponential()))
end
```
Somehow it's still not correct? Same suggestion as above: the indentation should use spaces instead of tabs.
Should be good now 👍
Ah, sorry, probably I copied the wrong link. It seems you updated the second test as well, so all good 👍
LGTM
bors r+
At the moment we will actually call `generate_mainbody!` on inputs to macros inside the model, e.g. in a model `@mymacro x ~ Normal()` will actually result in code `@mymacro $(generate_mainbody!(:(x ~ Normal())))` (or something, you get the idea). IMO, this shouldn't be done for the following reasons:

1. It breaks with what you'd expect in Julia, IMO, which is that a macro eats the "raw" code.
2. It means that if we want to do stuff like `@reparam` from #220 (Reparameterizations), and a bunch of others (see #221, Small simplification of compiler, for a small list of possibilities), we need to touch the compiler rather than just make a small macro that performs transformations *after* the compiler has done its job (referring to the DynamicPPL compiler here).
3. If the user wants to use a macro on some variables, but they want the actual variable rather than messing around with the sample-statement, they can just separate it into two lines, e.g. `x ~ Normal(); @mymacro ...`.

Also, to be completely honest, for the longest time I've just assumed that I'm not even allowed to do `@mymacro x ~ Normal()` and have things work 😅 I bet a lot of people have the same impression by default (though this might of course just not be true :) )
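To illustrate reason 1 with plain Julia (a hypothetical debugging macro, not something from this PR): after this change, a macro used inside `@model` sees the raw expression rather than the compiler's rewrite of it.

```julia
# Hypothetical macro: shows the expression exactly as it is received.
macro showraw(ex)
    @show ex          # with this PR: ex = :(x ~ Normal()), the raw code
    return esc(ex)    # hand it back to the DynamicPPL compiler unchanged
end

# Usage inside a model (assuming DynamicPPL and Distributions are loaded):
# @model function demo()
#     @showraw x ~ Normal()
# end
```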
Build failed:
bors r+
Part of the motivation for #221 and #222 was so we could add submodels/model-nesting. Well, now we can. Special thanks to @devmotion, who reviewed those PRs (several times), improved them significantly, made additional PRs, and suggested the current impl of the `@submodel` ❤️

EDIT: We fixed the performance :) This now has zero runtime overhead! See comment-section.

EDIT 2: Thanks to @devmotion, we can now also deal with dynamically specified prefixes!

- [Motivating example: AR1-prior](#org46a90a5)
- [Demos](#org7e05701)
- [Can it ever fail?](#org75acb71)
- [Benchmarks](#orga99bcf4)

<a id="org46a90a5"></a>

# Motivating example: AR1-prior

```julia
using Turing
using DynamicPPL
```

```julia
# Could have made a model which samples `num_obs` AR1 samples simultaneously,
# but for the sake of showing off dynamic prefixes, we'll only use a vector-implementation.
# The matrix implementation will be quite a bit faster too, but oh well.
@model function AR1(num_steps, α, μ, σ, ::Type{TV} = Vector{Float64}) where {TV}
    η ~ MvNormal(num_steps, 1.0)
    δ = sqrt(1 - α^2)

    x = TV(undef, num_steps)
    x[1] = η[1]
    @inbounds for t = 2:num_steps
        x[t] = @. α * x[t - 1] + δ * η[t]
    end

    return @. μ + σ * x
end

# Generate an observation
σ_obs = 0.1
num_obs = 5
num_steps = 10

ar1 = AR1(num_steps, 0.5, 1.0, 1.0)
ys = mapreduce(hcat, 1:num_obs) do i
    ar1() + σ_obs * randn(num_steps)
end
```

    10×5 Matrix{Float64}:
      2.30189    0.301618   1.73268   -0.65096    1.46835
      2.11187   -1.34878    2.3728     1.02125    3.28422
     -0.249064   0.769488   1.34044    3.22175    2.52196
     -0.25863   -0.216914   0.528954   3.04756    3.8234
      0.372122   0.473511   0.708068   0.76197    0.202003
      0.41487    0.759435   1.80162    0.790204   0.12331
      1.32585    0.567929   2.74316    1.0874     2.82701
      1.84307    1.16138    1.36382    0.735388   1.07423
      3.20139    0.75177    1.57236    0.865401  -0.315341
      1.22479    1.35688    2.8239     0.597959   0.587955

```julia
@model function demo(y)
    α ~ Uniform()
    μ ~ Normal()
    σ ~ truncated(Normal(), 0, Inf)

    num_steps = size(y, 1)
    num_obs = size(y, 2)
    @inbounds for i = 1:num_obs
        x = @SubModel $(Symbol("ar1_$i")) AR1(num_steps, α, μ, σ)
        y[:, i] ~ MvNormal(x, 0.1)
    end
end;

m = demo(ys);
vi = VarInfo(m);
```

```julia
keys(vi)
```

    8-element Vector{VarName{sym, Tuple{}} where sym}:
     α
     μ
     σ
     ar1_1.η
     ar1_2.η
     ar1_3.η
     ar1_4.η
     ar1_5.η

```julia
vi[@varname α]
```

    0.9383208224122919

```julia
chain = sample(m, NUTS(1_000, 0.8), 3_000);
```

    ┌ Info: Found initial step size
    │   ϵ = 0.025
    └ @ Turing.Inference /home/tor/.julia/packages/Turing/rHLGJ/src/inference/hmc.jl:188
    Sampling: 100%|█████████████████████████████████████████| Time: 0:04:00

```julia
chain[1001:end, [:α, :μ, :σ], :]
```

    Chains MCMC chain (2000×3×1 Array{Float64, 3}):

    Iterations        = 1001:3000
    Thinning interval = 1
    Chains            = 1
    Samples per chain = 2000
    parameters        = α, μ, σ
    internals         =

    Summary Statistics
      parameters      mean       std   naive_se      mcse        ess      rhat
          Symbol   Float64   Float64    Float64   Float64    Float64   Float64

               α    0.5474    0.1334     0.0030    0.0073   159.6969    0.9995
               μ    1.0039    0.2733     0.0061    0.0168   169.9106    1.0134
               σ    1.1294    0.1807     0.0040    0.0106   166.8670    0.9998

    Quantiles
      parameters      2.5%     25.0%     50.0%     75.0%     97.5%
          Symbol   Float64   Float64   Float64   Float64   Float64

               α    0.2684    0.4625    0.5534    0.6445    0.7861
               μ    0.4248    0.8227    1.0241    1.2011    1.4801
               σ    0.8781    1.0018    1.0989    1.2239    1.5472

Yay! We recovered the true parameters :tada:

```julia
@benchmark $m($vi)
```

    BenchmarkTools.Trial:
      memory estimate:  12.05 KiB
      allocs estimate:  123
      --------------
      minimum time:     15.091 μs (0.00% GC)
      median time:      17.861 μs (0.00% GC)
      mean time:        19.582 μs (5.23% GC)
      maximum time:     10.293 ms (99.46% GC)
      --------------
      samples:          10000
      evals/sample:     1

<a id="org7e05701"></a>

# Demos

```julia
using DynamicPPL, Distributions
```

    ┌ Info: Precompiling DynamicPPL [366bfd00-2699-11ea-058f-f148b4cae6d8]
    └ @ Base loading.jl:1317

```julia
@model function demo1(x)
    x ~ Normal()
end;

@model function demo2(x, y)
    @SubModel demo1(x)
    y ~ Uniform()
end false;

m2 = demo2(missing, missing);
vi2 = VarInfo(m2);
keys(vi2)
```

    2-element Vector{VarName{sym, Tuple{}} where sym}:
     x
     y

```julia
println(vi2[VarName(Symbol("x"))])
println(vi2[VarName(Symbol("y"))])
```

    0.3069117531180063
    0.7325324947386318

We can also `observe` without issues:

```julia
@model function demo2(x, y)
    @SubModel demo1(x)
    y ~ Normal(x)
end false;

m2 = demo2(1000.0, missing);
vi2 = VarInfo(m2);
keys(vi2)
```

    1-element Vector{VarName{:y, Tuple{}}}:
     y

```julia
vi2[@varname y]
```

    1000.3905079427211

```julia
DynamicPPL.getlogp(vi2)
```

    -500001.9141252931

But what if the models have the same variable-names?!

"Sure, this is cool and all, but can we even use the values from the nested values in the parent model?"

```julia
@model function demo_return(x)
    x ~ Normal()
    return x
end;

@model function demo_useval(x, y)
    x1 = @SubModel sub1 demo_return(x)
    x2 = @SubModel sub2 demo_return(y)

    z ~ Normal(x1 + x2 + 100, 1.0)
end false;

vi = VarInfo(demo_useval(missing, missing));
keys(vi)
```

    3-element Vector{VarName{sym, Tuple{}} where sym}:
     sub1.x
     sub2.x
     z

```julia
vi[@varname z]
```

    101.09066854862154

And just to prove a point:

```julia
@model function nested(x, y)
    @SubModel 1 nested1(x, y)
    y ~ Uniform()
end false;

@model function nested1(x, y)
    @SubModel 2 nested2(x, y)
    y ~ Uniform()
end false;

@model function nested2(x, y)
    z = @SubModel 3 nested3(x, y)
    y ~ Normal(z, 1.0)
end false;

@model function nested3(x, y)
    x ~ Uniform()
    y ~ Normal(-100.0, 1.0)
    return x + 1000
end false;

m = nested(missing, missing);
vi = VarInfo(m);
keys(vi)
```

    5-element Vector{VarName{sym, Tuple{}} where sym}:
     1.2.3.x
     1.2.3.y
     1.2.y
     1.y
     y

```julia
vi[VarName(Symbol("1.2.y"))]
```

    1000.5609156083766

```julia
DynamicPPL.getlogp(vi)
```

    -4.620040828101227

<a id="org75acb71"></a>

# Can it ever fail?

Yeah, if the user doesn't provide the prefix, it can:

```julia
@model function nested(x, y)
    @SubModel nested1(x, y)
    y ~ Uniform()
end false;

@model function nested1(x, y)
    @SubModel nested2(x, y)
    y ~ Uniform()
end false;

@model function nested2(x, y)
    z = @SubModel nested3(x, y)
    y ~ Normal(z, 1.0)
end false;

@model function nested3(x, y)
    x ~ Uniform()
    y ~ Normal(-100.0, 1.0)
    return x + 1000
end false;

m = nested(missing, missing);
vi = VarInfo(m);
keys(vi)
```

    2-element Vector{VarName{sym, Tuple{}} where sym}:
     x
     y

```julia
# Inner-most value is recorded (i.e. the first one reached)
vi[@varname y]
```

    -100.16836599596732

And it messes up the logp computation:

```julia
DynamicPPL.getlogp(vi)
```

    -Inf

But I could imagine there's a way for us to fix this, or at least warn the user when this happens.

<a id="orga99bcf4"></a>

# Benchmarks

At this point you're probably wondering, "but does it have any overhead (at runtime)?".

For "shallow" nestings, nah, but if you go deep enough there seems to be a tiny bit (likely because we're calling the "constructor" for the model):

```julia
using BenchmarkTools

@model function base(x, y)
    x ~ Uniform()
    y ~ Uniform()
    y1 ~ Uniform()
    z = x + 1000
    y12 ~ Normal()
    y123 ~ Normal(-100.0, 1.0)
end

m1 = base(missing, missing);
vi1 = VarInfo(m1);
```

```julia
@model function nested_shallow(x, y)
    @SubModel 1 nested1_shallow(x, y)
    y ~ Uniform()
end false;

@model function nested1_shallow(x, y)
    x ~ Uniform()
    y ~ Uniform()
    z = x + 1000
    y12 ~ Normal()
    y123 ~ Normal(-100.0, 1.0)
end false;

m2 = nested_shallow(missing, missing);
vi2 = VarInfo(m2);
```

```julia
@model function nested(x, y)
    @SubModel 1 nested1(x, y)
    y ~ Uniform()
end false;

@model function nested1(x, y)
    @SubModel 2 nested2(x, y)
    y ~ Uniform()
end false;

@model function nested2(x, y)
    z = @SubModel 3 nested3(x, y)
    y ~ Normal(z, 1.0)
end false;

@model function nested3(x, y)
    x ~ Uniform()
    y ~ Normal(-100.0, 1.0)
    return x + 1000
end

m3 = nested(missing, missing);
vi3 = VarInfo(m3);
```

```julia
@model function nested_noprefix(x, y)
    @SubModel nested_noprefix1(x, y)
    y ~ Uniform()
end false;

@model function nested_noprefix1(x, y)
    @SubModel nested_noprefix2(x, y)
    y1 ~ Uniform()
end false;

@model function nested_noprefix2(x, y)
    z = @SubModel nested_noprefix3(x, y)
    y2 ~ Normal(z, 1.0)
end false;

@model function nested_noprefix3(x, y)
    x ~ Uniform()
    y3 ~ Normal(-100.0, 1.0)
    return x + 1000
end

m4 = nested_noprefix(missing, missing);
vi4 = VarInfo(m4);
```

```julia
keys(vi1)
```

    5-element Vector{VarName{sym, Tuple{}} where sym}:
     x
     y
     y1
     y12
     y123

```julia
keys(vi2)
```

    5-element Vector{VarName{sym, Tuple{}} where sym}:
     1.x
     1.y
     1.y12
     1.y123
     y

```julia
keys(vi3)
```

    5-element Vector{VarName{sym, Tuple{}} where sym}:
     1.2.3.x
     1.2.3.y
     1.2.y
     1.y
     y

```julia
keys(vi4)
```

    5-element Vector{VarName{sym, Tuple{}} where sym}:
     x
     y3
     y2
     y1
     y

```julia
@benchmark $m1($vi1)
```

    BenchmarkTools.Trial:
      memory estimate:  160 bytes
      allocs estimate:  5
      --------------
      minimum time:     1.714 μs (0.00% GC)
      median time:      1.747 μs (0.00% GC)
      mean time:        1.835 μs (0.00% GC)
      maximum time:     6.894 μs (0.00% GC)
      --------------
      samples:          10000
      evals/sample:     10

```julia
@benchmark $m2($vi2)
```

    BenchmarkTools.Trial:
      memory estimate:  160 bytes
      allocs estimate:  5
      --------------
      minimum time:     1.759 μs (0.00% GC)
      median time:      1.778 μs (0.00% GC)
      mean time:        1.819 μs (0.00% GC)
      maximum time:     5.563 μs (0.00% GC)
      --------------
      samples:          10000
      evals/sample:     10

```julia
@benchmark $m3($vi3)
```

    BenchmarkTools.Trial:
      memory estimate:  160 bytes
      allocs estimate:  5
      --------------
      minimum time:     1.718 μs (0.00% GC)
      median time:      1.746 μs (0.00% GC)
      mean time:        1.787 μs (0.00% GC)
      maximum time:     5.758 μs (0.00% GC)
      --------------
      samples:          10000
      evals/sample:     10

```julia
@benchmark $m4($vi4)
```

    BenchmarkTools.Trial:
      memory estimate:  160 bytes
      allocs estimate:  5
      --------------
      minimum time:     1.672 μs (0.00% GC)
      median time:      1.696 μs (0.00% GC)
      mean time:        1.756 μs (0.00% GC)
      maximum time:     4.882 μs (0.00% GC)
      --------------
      samples:          10000
      evals/sample:     10

Notice that the number of allocations has increased for the deeply nested model. Seems like the Julia compiler isn't too good at inferring the return-types of Turing models? This also seems to be the case by looking at the lowered code. I haven't given this too much thought yet, btw; there's likely a way for us to help the compiler here.