Ensure metadata is in sync with running node during tests #333
Conversation
Force-pushed from `b2f75c7` to `9fd95f2`
```diff
@@ -70,6 +77,13 @@ jobs:
       - name: Checkout sources
         uses: actions/checkout@v2

+      - name: Download Substrate
+        run: |
+          curl "https://releases.parity.io/substrate/x86_64-debian:stretch/latest/substrate/substrate" --output substrate --location
```
I don't much like the repetition of this curl thing, but the alternate way of sharing artifacts between jobs works out to be about as verbose...
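For comparison, the artifact-sharing alternative mentioned above might look roughly like this (job layout and artifact names are hypothetical, not from this PR):

```yaml
# In a single job that downloads substrate once:
- name: Upload substrate binary
  uses: actions/upload-artifact@v2
  with:
    name: substrate
    path: substrate

# In each dependent job that needs it:
- name: Download substrate binary
  uses: actions/download-artifact@v2
  with:
    name: substrate
```

Each consuming job still needs its own upload/download step plus a `needs:` edge, which is why it ends up about as verbose as repeating the `curl`.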
:/
Main thing is, it would be good to remove `reqwest`, since we already have `ureq` and `jsonrpsee` http client dependencies in the workspace.
Also, I remember someone in my presentation (@athei or @dvdplm?) suggesting we could even enable the option for the macro to fetch the metadata from the running substrate node, which may be something we can do in the future for this use case.
test-runtime/Cargo.toml (Outdated)

```toml
[build-dependencies]
hex = "0.4.3"
reqwest = { version = "0.11.6", features = ["blocking", "json"] }
```
Since we already depend on `subxt` above, we could avoid this extra (another!) http client dependency. Should be able to use `subxt::RpcClient`.
Good point! I'll make use of that or one of the other http deps instead!
> Also, I remember someone in my presentation (@athei or @dvdplm?) suggesting we could even enable the option for the macro to fetch the metadata from the running substrate node, which may be something we can do in the future for this use case.
Yeah this was my suggestion. To download the metadata inside the proc macro.
Interesting! I reckon that's best left for a separate PR if we want it. It does strike me that adding the feature to the macro would only really be beneficial for this sort of testing, though, so I actually quite like the idea that the test-specific logic is kept separate and that the macro doesn't become a little more complex to handle it :)
```rust
}

/// Returns the next open port, or None if no port found in range.
fn next_open_port() -> Option<u16> {
```
I wonder whether we could extract `TestNodeProcess` and other test utils to their own crate. Not only could we reuse the node spawning logic, but we could publish that crate for use by other projects for testing their nodes. I would certainly use it (this code is originally copied from `cargo-contract`).

Just an idea, don't need to do it here.
I had a very similar thought doing this! I didn't really fancy the amount of churn in this PR, but I can imagine it would be useful!
Will be worth doing, separate PR probably better - must remember to remove the duplication here.
Why aren't we just binding to `0.0.0.0:0` instead and letting the OS find the next free port?
That is copied from code I wrote in haste, so most likely not the best way to do it. If that works then let's do it.

Edit: we can probably use that or something similar, since this code was originally designed for all the tests, which spin up in parallel and so are all attempting to claim ports.
I do like the `0.0.0.0:0` approach normally; the complexity is that you then need to read back the actual port used in order to connect to the service, so I think I'd need to monitor the stdout (unless you have a better idea; that's what I did in telemetry anyway?).

In the other tests, we need to grab 3 ports by the looks of things, so it is a little trickier!
But yes, in this build script, it's probably a little more robust to scan for a port than rely on any specific stdout from substrate on the matter (but if there's a better way to see which port was used, that'd be great!).
test-runtime/build.rs (Outdated)

```rust
let res =
    subxt::RpcClient::try_from_url(&format!("http://localhost:{}", port))
        .await
        .unwrap()
```
This might panic if it can't connect
Ah yes, I think I took it to only fail if the url was malformed, and was surprised to then have to move it after the await...
Actually, I did have a perusal through the code, and it's all OK as is (if the url is WS, it tries to establish a connection (hence the await), but if it's HTTP it won't try to connect and will only error if not an http(s) URL).
I'll change to an expect to document this!
Yeah, I think we should kill `http` anyway, because it won't work for subscriptions.
Other than the `unwrap()`, LGTM.
```toml
name = "test-runtime"
version = "0.1.0"
edition = "2021"
```
A `description` would be good here.
The logic for this crate exists mainly in the `build.rs` file.

At compile time, this crate will:
- Spin up a local `substrate` binary (set the `SUBSTRATE_NODE_PATH` env var to point to a custom binary, otehrwise it'll look for `substrate` on your PATH).
```suggestion
- Spin up a local `substrate` binary (set the `SUBSTRATE_NODE_PATH` env var to point to a custom binary, otherwise it'll look for `substrate` on your PATH).
```
At compile time, this crate will:
- Spin up a local `substrate` binary (set the `SUBSTRATE_NODE_PATH` env var to point to a custom binary, otehrwise it'll look for `substrate` on your PATH).
- Obtain metadata from this node.
Just a thought: perhaps the substrate node should have a sub-command `metadata dump /path/to/metadata-dumpfile.scale` that does this work? Feels a bit awkward to start a full node and run queries against it.
That's a good idea
I assume that the `metadata dump` command would itself start the substrate node and such. Wrapping this logic into a separate command that's useful on its own sounds like a good idea to me too!
…#333)

* add test-runtime crate to dynamically obtain metadata/node_runtime for tests
* cargo fmt
* Download substrate prior to cargo calls (hopefully)
* add README explaining test-runtime
* Fix CI, fmt and clippy
* more clippy
* tweak test-node readme
* fmt the clippied
* A little tidy up in build.rs
* use ureq and raw string
* Don't export unnecessary metadata
* async_std/RpcClient/bytes instead of ureq/Value/hex
* newline
* document try_from_url error unwrap
Adds a `test-runtime` crate with the sole purpose of acquiring metadata from the currently available substrate binary, and generating/exposing the subxt node runtime types from it, for use during tests. This ensures that the metadata we're using is always in sync with the substrate runtime the integration tests are running against.
A next step would be to pick a pinned version of substrate to run during CI that we know we're happy with, and to run a nightly pipeline using latest substrate to spot compatibility breakages.
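A nightly pipeline against latest substrate could be triggered with a schedule in the workflow file; a minimal fragment (the cron time is an arbitrary example, not from this PR):

```yaml
on:
  schedule:
    - cron: '0 2 * * *'  # run nightly against the latest substrate build
```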
Closes #329