feat(backend): add support for arbitrary provider requests with AnyRequest #32

Merged: 4 commits merged into foundry-rs:main from db/add-custom-request-backend on Nov 27, 2024

Conversation

dutterbutter (Contributor)

Motivation

  • Enable flexible and agnostic request handling in SharedBackend to support diverse use cases
  • This lets other implementations build on SharedBackend without requiring changes to core foundry-fork-db. For example, foundry-zksync currently maintains a fork of this repo that implements ZkSyncMiddleware to retrieve bytecode via zks_getBytecodeByHash; these changes eliminate the need for that fork while also serving other parties' needs outside of foundry-zksync.

Solution

  • src/backend.rs: Added the AnyRequestFuture struct and implemented the WrappedAnyRequest trait to handle arbitrary requests.
  • src/backend.rs: Updated the ProviderRequest and BackendRequest enums to include the AnyRequest variant.
  • src/backend.rs: Modified the SharedBackend struct to add the do_any_request method for executing arbitrary requests (a caller-side sketch follows this list).
  • src/error.rs: Added the AnyRequest variant to the DatabaseError enum for error handling.
  • Updated tests to include a new test case shared_backend_any_request for verifying the arbitrary request handling feature. This test spins up an in-memory HTTP server to simulate requests.
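
A rough caller-side sketch of the new entry point (hypothetical, not code from this PR: the import paths are assumed re-exports, and fetch_bytecode_by_hash is an illustrative stand-in for a custom RPC such as zks_getBytecodeByHash):

use foundry_fork_db::{DatabaseError, SharedBackend}; // assumed re-export paths

/// Hypothetical downstream consumer: route an arbitrary provider call
/// through the shared backend instead of forking the crate.
fn bytecode_via_any_request(
    backend: &mut SharedBackend,
    hash: [u8; 32],
) -> Result<Option<Vec<u8>>, DatabaseError> {
    backend.do_any_request(async move {
        // Any future resolving to Result<T, eyre::Report> can be passed in.
        fetch_bytecode_by_hash(hash).await
    })
}

/// Stand-in for the real provider call (e.g. zks_getBytecodeByHash).
async fn fetch_bytecode_by_hash(_hash: [u8; 32]) -> eyre::Result<Option<Vec<u8>>> {
    Ok(None)
}

The only contract the caller has to satisfy is the bound visible in the diff further down: the future must resolve to Result<T, eyre::Report> with T: fmt::Debug + Send + 'static.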

PR Checklist

  • Added Tests
  • Added Documentation
  • Breaking changes

@dutterbutter force-pushed the db/add-custom-request-backend branch from 4601be7 to 1647268 on November 25, 2024 16:58
@mattsse (Member) left a comment

lgtm,

this doesn't hurt.
I think what we could do is add support for caching arbitrary endpoint data, so that the bytecode responses get cached.

Comment on lines +812 to +826
pub fn do_any_request<T, F>(&mut self, fut: F) -> DatabaseResult<T>
where
    F: Future<Output = Result<T, eyre::Report>> + Send + 'static,
    T: fmt::Debug + Send + 'static,
{
    self.blocking_mode.run(|| {
        let (sender, rx) = oneshot_channel::<Result<T, eyre::Report>>();
        let req = BackendRequest::AnyRequest(Box::new(AnyRequestFuture {
            sender,
            future: Box::pin(fut),
        }));
        self.backend.unbounded_send(req)?;
        rx.recv()?.map_err(|err| DatabaseError::AnyRequest(Arc::new(err)))
    })
}
mattsse (Member)

the way this is implemented kinda acts as tokio::Handle::block_on, but it sends the request to the async backend. I think this could be quite useful, especially if we extend this with caching functionality, because right now this does not persist the responses to the cache.

we can add this feature separately though

dutterbutter (Contributor, author)

Will follow up accordingly.
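
For context on the block_on comparison above, a heavily hedged sketch of what the receiving side might look like; the crate's actual async backend loop is more involved, and poll_any_requests with its pending queue is purely an assumption:

use std::task::{Context, Poll};

// Restated from the diff hunk below; this trait is added by the PR.
trait WrappedAnyRequest: Unpin + Send + std::fmt::Debug {
    fn poll_inner(&mut self, cx: &mut Context<'_>) -> Poll<()>;
}

// Hypothetical receiving side: boxed requests arriving as
// BackendRequest::AnyRequest are kept until their wrapped future resolves
// and the result has been sent back to the blocking caller.
fn poll_any_requests(pending: &mut Vec<Box<dyn WrappedAnyRequest>>, cx: &mut Context<'_>) {
    // Keep only the requests whose inner future is still pending.
    pending.retain_mut(|req| req.poll_inner(cx).is_pending());
}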

Comment on lines +80 to +82
trait WrappedAnyRequest: Unpin + Send + fmt::Debug {
    fn poll_inner(&mut self, cx: &mut Context<'_>) -> Poll<()>;
}
mattsse (Member)

I see, this is basically a helper. Alternatively we could use BoxFuture, but that would require a manual Debug impl on the request enum, so this seems fine.
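
To make the Debug point concrete, here is a rough sketch, not the PR's exact code (the std mpsc channel stands in for the crate's oneshot channel, and the bounds are assumptions), of how the concrete wrapper can satisfy the Debug-bounded helper trait so the request enum keeps a derived Debug through Box<dyn WrappedAnyRequest>:

use std::{
    fmt,
    future::Future,
    pin::Pin,
    sync::mpsc,
    task::{Context, Poll},
};

trait WrappedAnyRequest: Unpin + Send + fmt::Debug {
    fn poll_inner(&mut self, cx: &mut Context<'_>) -> Poll<()>;
}

// Names follow the PR; the channel type is only a stand-in.
struct AnyRequestFuture<T, Err> {
    sender: mpsc::Sender<Result<T, Err>>,
    future: Pin<Box<dyn Future<Output = Result<T, Err>> + Send>>,
}

impl<T: Send + 'static, Err: Send + 'static> WrappedAnyRequest for AnyRequestFuture<T, Err> {
    fn poll_inner(&mut self, cx: &mut Context<'_>) -> Poll<()> {
        // Drive the wrapped future; once it resolves, hand the result back
        // to the blocking caller and report completion to the backend.
        self.future.as_mut().poll(cx).map(|result| {
            let _ = self.sender.send(result);
        })
    }
}

// A manual Debug impl on the concrete wrapper is enough; no manual impl is
// needed on the request enum itself.
impl<T: 'static, Err: 'static> fmt::Debug for AnyRequestFuture<T, Err> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("AnyRequestFuture").finish_non_exhaustive()
    }
}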

@mattsse merged commit d90d227 into foundry-rs:main on Nov 27, 2024. 16 checks passed.