Oblackmon3/rebase polymer develop to match v0.0.25 #5
base: polymer-develop
Conversation
Commits:

* … is properly implemented
* …vements, various bugs)
* fix: checking param for blockHash or blockNumber obj
* fix: conditional logic for eth_call caching
* fix: conditional logic for eth_getProof and eth_getStorageAt caching
* fix: rm redundant len check for blockHash param
* fix: set error as nil for conditional checking of params
* fix: handle blockHash if param is string
* test: eth_call and eth_getProof blockHash/blockNumber object as param
* fix: NormalizeEvmHttpJsonRpc for eth_getStorageAt method to support blockNumber as obj
Walkthrough

A new GitHub Actions workflow was added to prune container images (`.github/workflows/prune.yml`).

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant GitHubActions
    participant ContainerRegistry
    participant Upstream
    User->>GitHubActions: Trigger prune workflow
    GitHubActions->>ContainerRegistry: Authenticate
    GitHubActions->>ContainerRegistry: Fetch multi-platform package SHAs
    GitHubActions->>ContainerRegistry: Apply retention policy (dry-run)
    GitHubActions->>User: Workflow completed
```
Actionable comments posted: 36
🧹 Outside diff range and nitpick comments (65)
util/headers.go (1)

Line range hint 11-20: Function body updates are correct, with a minor optimization possible.

The changes to iterate over `r.Header` and use `r.Header.Get(k)` are correct and consistent with the new function signature. However, a minor optimization is possible. Consider this change to avoid calling `strings.ToLower(k)` twice:

```go
func ExtractUsefulHeaders(r *http.Response) map[string]interface{} {
	var result = make(map[string]interface{})
	for k := range r.Header {
		kl := strings.ToLower(k)
		if strings.HasPrefix(kl, "x-") || strings.Contains(kl, "trace") ||
			strings.Contains(kl, "debug") || strings.Contains(kl, "correlation") ||
			strings.Contains(kl, "-id") ||
			kl == "content-type" || kl == "content-length" {
			result[kl] = r.Header.Get(k)
		}
	}
	return result
}
```

This change stores the lowercase key in `kl` and uses it consistently, avoiding an extra call to `strings.ToLower()`.

util/redact.go (1)
Line range hint 1-34: Consider the following improvements to enhance robustness and maintainability:

- The use of a fixed 5-character hash substring might not provide sufficient uniqueness for a large number of endpoints. Consider making this length configurable or using a longer substring.
- The different handling of URL schemes could potentially be simplified. For example:

```go
var redactedEndpoint string
switch {
case parsedURL.Scheme == "http" || parsedURL.Scheme == "https" ||
	parsedURL.Scheme == "ws" || parsedURL.Scheme == "wss":
	redactedEndpoint = fmt.Sprintf("%s://%s#redacted=%s", parsedURL.Scheme, parsedURL.Host, hash[:5])
case strings.HasSuffix(parsedURL.Scheme, "envio"):
	redactedEndpoint = fmt.Sprintf("%s://%s", parsedURL.Scheme, parsedURL.Host)
default:
	redactedEndpoint = fmt.Sprintf("%s#redacted=%s", parsedURL.Scheme, hash[:5])
}
```

- Consider adding more informative error messages when URL parsing fails, to aid in debugging.

Would you like me to propose a refactored version of the entire function incorporating these suggestions?
.github/workflows/prune.yml (1)

23-27: Consider quoting the variable in the shell command

To prevent potential word splitting issues, double-quote the output-file variable when setting the GitHub output:

```diff
- echo "multi-arch-digests=$erpc" >> $GITHUB_OUTPUT
+ echo "multi-arch-digests=$erpc" >> "$GITHUB_OUTPUT"
```

This ensures the redirect target is treated as a single word even if `$GITHUB_OUTPUT` contains spaces; the `$erpc` value is already inside double quotes.

🧰 Tools
🪛 actionlint

25-25: shellcheck reported issue in this script: SC2086:info:2:36: Double quote to prevent globbing and word splitting

(shellcheck)
test/k6/run.js (2)

8-17: Approve the new load testing configuration with a suggestion.

The new `scenarios` configuration provides more precise control over the load testing process. The constant arrival rate of 500 requests per second for 30 minutes with up to 1000 VUs is a significant load test.

Consider adding a comment explaining the rationale behind these specific values (e.g., why 500 requests/second for 30 minutes) to improve maintainability and help other team members understand the test design.

55-61: Approve the improved error handling with a suggestion for enhancement.

The addition of the try-catch block significantly improves error handling and debugging capabilities. Logging both the error message and the response body will be very helpful for troubleshooting issues.

Consider using a structured logging format (e.g., JSON) for the error message and response body. This would make it easier to parse and analyze logs, especially when dealing with a large volume of test results. For example:

```js
console.log(JSON.stringify({ error: 'Unmarshal error', message: e.message, responseBody: r.body }));
```

docs/pages/deployment/railway.mdx (1)
33-49: Excellent addition of usage instructions.

The new "Usage in your services" section provides valuable information on how to integrate eRPC into services using the private network. The code snippet is clear and demonstrates the correct usage.

Consider the following minor grammatical improvements:

- Line 35: Add a comma after "overhead" for better readability.
- Line 35: Add the article "a" before "private network" for grammatical correctness.

Suggested revision:

```diff
-To reduce cost and overhead use private network (`.railway.internal`) to connect to eRPC, from your backend services (such as indexers or mev bots):
+To reduce cost and overhead, use a private network (`.railway.internal`) to connect to eRPC from your backend services (such as indexers or mev bots):
```

🧰 Tools
🪛 LanguageTool

[uncategorized] ~35-~35: A comma might be missing here.
Context: ...ge in your services To reduce cost and overhead use private network (`.railway.internal`...

(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)

[uncategorized] ~35-~35: You might be missing the article “a” here.
Context: ...rvices To reduce cost and overhead use private network (`.railway.internal`) to connec...

(AI_EN_LECTOR_MISSING_DETERMINER_A)
docs/pages/operation/production.mdx (4)

5-21: Valuable addition, but needs minor improvements

The new "Memory usage" section provides crucial information for production deployment. However, there are some areas for improvement:

- Line 7: Consider rephrasing for clarity and grammar: "The biggest contributor to memory usage in eRPC is the size of responses to your requests."
- Line 7: Add "the" before "size" as suggested by static analysis.
- Line 7: Fix typo "occesional" to "occasional".
- Line 9: Add "the" before "majority" as suggested by static analysis.
- Line 15: Consider adding a brief explanation of what GOGC does for clarity.
- Lines 18-20: Consider rephrasing the GOMEMLIMIT explanation for clarity.

Here's a suggested revision for lines 18-20:

```yaml
# This flag sets a soft memory limit for Go's garbage collector.
# IMPORTANT: Setting this value too low might cause frequent GC runs,
# potentially impacting performance without significant memory benefits.
```

These changes will improve readability and address the grammar issues highlighted by static analysis.

🧰 Tools
🪛 LanguageTool

[uncategorized] ~6-~6: A determiner appears to be missing. Consider inserting it.
Context: ...g eRPC in production. ## Memory usage Biggest memory usage contributor in eRPC is siz...

(AI_EN_LECTOR_MISSING_DETERMINER)

[uncategorized] ~7-~7: You might be missing the article “the” here.
Context: ...est memory usage contributor in eRPC is size of responses of your requests. For exam...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)

[grammar] ~9-~9: An article may be missing.
Context: ...igh RPS of large request/responses. In majority of use-cases eRPC uses around 256MB of mem...

(BUNCH_OF)
27-28: Minor grammatical improvement needed

The changes to this section enhance clarity regarding network-level retry configuration. However, there's a small grammatical issue:

- Line 27: Add "The" at the beginning of the sentence to address the missing determiner issue highlighted by static analysis.

Suggested revision:

```
The recommendation is to configure `maxCount` to be equal to the number of upstreams.
```

This change will improve the grammatical structure of the sentence.

🧰 Tools
🪛 LanguageTool

[uncategorized] ~27-~27: A determiner appears to be missing. Consider inserting it.
Context: ...m, network-level retry is still useful. Recommendation is to configure `maxCount` to be equal ...

(AI_EN_LECTOR_MISSING_DETERMINER)
32-32: Improved explanation, minor style adjustments needed

The updated explanation of the hedge policy provides better clarity on its purpose and implications. However, there are two minor style issues to address:

- Remove the hyphen in "highly-recommended" as it's redundant with the "-ly" ending.
- Add a comma after "For example" to improve readability.

Suggested revision:

```
[Hedge policy](/config/failsafe#hedge-policy) is **highly recommended** if you prefer "fast response as soon as possible". For example, setting `500ms` as "delay" will make sure if upstream A did not respond under 500 milliseconds, simultaneously another request to upstream B will be fired, and eRPC will respond back as soon as any of them comes back with result faster. Note: since more requests are sent, it might incur higher costs to achieve the "fast response" goal.
```

These changes will improve the style and readability of the paragraph.

🧰 Tools
🪛 LanguageTool

[uncategorized] ~32-~32: The hyphen in highly-recommended is redundant.
Context: ...cy](/config/failsafe#hedge-policy) is highly-recommended if you prefer "fast response as soon ...

(ADVERB_LY_HYPHEN_FIX)

[typographical] ~32-~32: After the expression ‘for example’ a comma is usually used.
Context: ...fast response as soon as possible". For example setting `500ms` as "delay" will make su...

(COMMA_FOR_EXAMPLE)
36-36: Typo fixed, minor grammatical improvement needed

The typo correction from "compeltely" to "completely" improves the text. However, there's a small grammatical improvement to make: add "the" before "database" to improve the sentence structure.

Suggested revision:

```
Storing cached RPC responses requires high storage for read-heavy use-cases such as indexing 100m blocks on Arbitrum. eRPC is designed to be robust towards cache database issues, so even if the database is completely down it will not impact the RPC availability.
```

This change will address the grammatical issue highlighted by static analysis and improve the overall readability of the sentence.

🧰 Tools
🪛 LanguageTool

[uncategorized] ~36-~36: Possible missing comma found.
Context: ...equires high storage for read-heavy use-cases such as indexing 100m blocks on Arbitru...

(AI_HYDRA_LEO_MISSING_COMMA)

[uncategorized] ~36-~36: You might be missing the article “the” here.
Context: ...wards cache database issues, so even if database is completely down it will not impact t...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)
vendors/quicknode.go (1)

54-56: Approved: Improved error handling specificity

The changes enhance the error handling by using a more generic error type `NewErrEndpointRequestTooLarge` and providing more specific information with `common.EvmBlockRangeTooLarge`. This improvement aligns well with the goal of refining error handling.

Consider updating the error message in `common.JsonRpcErrorEvmLogsLargeRange` to reflect the new error type, ensuring consistency across the error handling process. For example:

```go
common.JsonRpcErrorEvmLogsLargeRange = "Request too large: EVM logs range exceeds limit"
```

.github/workflows/release.yml (3)
26-26: LGTM: Added conditional execution for tagging steps

The addition of conditional checks for the tagging steps is a good improvement. This allows the workflow to handle both tagged releases and automatic builds from the main branch.

Consider using a single condition at the job level instead of individual step conditions for better readability:

```yaml
tag:
  if: github.event.inputs.version_tag != ''
  runs-on: ubuntu-latest
  steps:
    # ... rest of the steps without individual conditions
```

This change would make the job's purpose clearer and reduce repetition in the workflow file.

Also applies to: 33-33, 39-39

49-49: LGTM: Added conditional execution for release steps

The addition of conditional checks for the release steps is consistent with the changes in the 'tag' job and provides good flexibility in the workflow.

As suggested for the 'tag' job, consider using a single condition at the job level for better readability:

```yaml
release:
  if: github.event.inputs.version_tag != ''
  runs-on: ubuntu-latest
  needs: tag
  steps:
    # ... rest of the steps without individual conditions
```

This change would make the job's purpose clearer and reduce repetition in the workflow file.

Also applies to: 56-56, 62-62

103-128: LGTM: Improved Docker build and push steps

The separation of build steps for tagged and untagged builds is a good improvement. It allows for different tagging strategies and provides flexibility in the release process. The use of the short SHA in the build arguments is also a good practice for better readability.

Consider adding a comment explaining the purpose of each build step to improve maintainability:

```yaml
# Build and push Docker image for untagged (main branch) builds
- name: Build and push Docker image from main
  if: github.event.inputs.version_tag == ''
  # ... rest of the step

# Build and push Docker image for tagged releases
- name: Build and push Docker image with tags
  if: github.event.inputs.version_tag != ''
  # ... rest of the step
```

docs/pages/config/rate-limiters.mdx (2)
71-71: Minor grammatical improvement suggested.

Consider revising the sentence for better clarity:

```diff
- The auto-tuner is enabled by default when an upstream has any rate limit budget defined.
+ The auto-tuner is enabled by default when any rate limit budget is defined for an upstream.
```

🧰 Tools
🪛 LanguageTool

[grammar] ~71-~71: A noun may be missing here.
Context: ...e auto-tuner is enabled by default when an upstream has any rate limit budget defined. Here...

(DT_JJ_NO_NOUN)

91-106: LGTM: Clear explanation of auto-tuner operation and default configuration.

The explanation of how the auto-tuner works is clear and informative. Providing the default configuration is very helpful for users. To further enhance this section, consider adding a brief note on how users can override these defaults in their own configuration, for example:

```yaml
rateLimitAutoTune:
  errorRateThreshold: 0.05 # Override the default value
```

This would provide a concrete example of how users can customize the auto-tuner settings.
docs/pages/operation/batch.mdx (1)

7-14: Excellent addition of important information about JSON-RPC batching drawbacks.

The new `Callout` component provides crucial context for users regarding the potential issues with JSON-RPC batching in EVM. This information is well-placed at the beginning of the document, ensuring visibility and helping users make informed decisions about using batch requests.

Consider adding a brief sentence at the end of the `Callout` to guide users on when batching might still be beneficial, e.g., "However, batching may still be useful in specific scenarios with high-latency networks or when dealing with a large number of small, independent requests."

cmd/erpc/main_test.go (1)
160-160: Approve: Simplified JSON handling in mock response

The change from `json.RawMessage(...)` to `[]byte(...)` is a good simplification. It removes the dependency on the `encoding/json` package for this test, which aligns with the broader changes in the PR. This modification maintains the same functionality while potentially offering a slight performance improvement.

Consider applying this change consistently across all test files that use `gock` for mocking HTTP responses, if not already done. This will ensure a uniform approach to handling JSON data in tests.

common/response.go (4)
108-128: Great implementation of lazy loading and thread-safe parsing!

The `JsonRpcResponse` method is well-implemented with atomic operations and proper nil checks. The lazy loading approach is efficient and prevents unnecessary parsing.

However, there's one potential improvement: consider closing the `body` after reading to prevent resource leaks.

```diff
 if r.body != nil {
 	err := jrr.ParseFromStream(r.body, r.expectedSize)
+	r.body.Close()
 	if err != nil {
 		return nil, err
 	}
 }
```

This ensures that the body is always closed after it's read, preventing potential resource leaks.
Line range hint 190-210: Improved thread safety, but consider lock optimization

The changes to `IsObjectNull` enhance thread safety through atomic operations and additional locking. This prevents race conditions and ensures consistent access to shared data.

However, the multiple locks (RLock on `resultMu` and `errMu`) might impact performance and could potentially lead to deadlocks if not managed carefully.

Consider optimizing the locking strategy:

- Combine the locks into a single operation if possible.
- If separate locks are necessary, ensure they're always acquired in the same order across the codebase to prevent deadlocks.

Example of combining locks:

```go
func (r *NormalizedResponse) IsObjectNull() bool {
	if r == nil {
		return true
	}
	jrr, _ := r.JsonRpcResponse()
	if jrr == nil {
		return true
	}
	jrr.resultMu.RLock()
	jrr.errMu.RLock()
	defer jrr.resultMu.RUnlock()
	defer jrr.errMu.RUnlock()
	if len(jrr.Result) == 0 && jrr.Error == nil && jrr.ID() == 0 {
		return true
	}
	return false
}
```

This approach reduces the number of lock/unlock operations and simplifies the code.
Line range hint 214-245: Improved thread safety and efficiency, but fix orphaned unlock

The changes to `EvmBlockNumber` enhance thread safety through atomic operations and implement efficient lazy loading of the block number. These improvements prevent race conditions and unnecessary calculations.

However, there's an issue that needs to be addressed: there's an orphaned `r.RUnlock()` call on line 220 that doesn't have a corresponding `RLock()`. This could lead to a runtime panic if the mutex is not locked when this method is called.

To fix this, either remove the orphaned `RUnlock()` or add the corresponding `RLock()` if locking is necessary:

```diff
 func (r *NormalizedResponse) EvmBlockNumber() (int64, error) {
 	if r == nil {
 		return 0, nil
 	}
 	if n := r.evmBlockNumber.Load(); n != 0 {
 		return n, nil
 	}
-	r.RUnlock()
 	// ... rest of the method
 }
```

Please review and adjust the locking strategy as needed to ensure proper synchronization.
269-283: Excellent addition of resource management

The new `Release` method is a valuable addition for proper resource management:

- It ensures that the response body is closed, preventing resource leaks.
- It clears the JSON-RPC response, potentially freeing up memory.
- The use of locking prevents concurrent access issues.

These practices contribute to better memory management and resource cleanup.

Consider improving the error handling for the body closure:

```diff
 if r.body != nil {
 	err := r.body.Close()
 	if err != nil {
-		log.Error().Err(err).Interface("response", r).Msg("failed to close response body")
+		log.Warn().Err(err).Msg("failed to close response body")
 	}
 	r.body = nil
 }
```

This change:

- Uses `Warn` instead of `Error`, as failing to close the body is not typically a critical error.
- Removes the `Interface("response", r)` to avoid potentially logging sensitive information.
- Simplifies the error message for clarity.

test/fake_server.go (5)
Line range hint 39-40: LGTM: Enhanced rate limiting simulation.

The addition of `LimitedRate` and `requestsLimited` fields, along with the updated `NewFakeServer` function, improves the server's ability to simulate rate limiting scenarios. This is a valuable enhancement for testing purposes.

Consider adding a comment explaining the purpose of the `LimitedRate` field for better code documentation.

Also applies to: 52-53
107-134: LGTM: Improved request handling logic.

The rewrite of the `handleRequest` method significantly enhances the server's capability to handle both single and batch JSON-RPC requests. The error handling is more robust, and the method correctly distinguishes between different request types.

Consider adding a debug log statement at the beginning of the method to log the incoming request body. This could be helpful for troubleshooting in test environments.
158-173: LGTM: Robust single request handling.

The `handleSingleRequest` method is well-implemented, with proper error handling and correct setting of response headers. The use of a buffer for encoding before writing the response is a good practice.

Consider replacing the magic number `-32000` with a named constant for better readability; note that it is a JSON-RPC server-error code rather than an HTTP status, so an HTTP constant such as `http.StatusInternalServerError` would not be an appropriate substitute.
Line range hint 175-236: LGTM: Enhanced request processing with improved validations.

The `processSingleRequest` method now includes important validations for the JSON-RPC version and method name. The addition of rate limiting simulation and more detailed error responses significantly improves the server's ability to mimic real-world scenarios.

Consider extracting the magic numbers for error codes (e.g., -32600, -32000, -32001) into named constants for better maintainability and readability.
264-275: LGTM: Support for large fake responses.

The addition of support for generating large fake responses in the `findMatchingSample` method is a valuable feature for performance testing. The implementation correctly handles the "bigfake:" prefix and potential parsing errors.

Consider using a more efficient method to generate the large random string, such as using a pre-generated set of random bytes and repeating it to reach the desired size. This could be more performant for very large responses.
common/request.go (2)
Line range hint 171-196: LGTM: Enhanced ApplyDirectivesFromHttp method

The `ApplyDirectivesFromHttp` method has been significantly improved:

- The rename and additional `queryArgs` parameter enhance flexibility.
- The use of `util.Mem2Str` aligns with new memory manipulation functions.
- The addition of query parameter handling provides more ways to apply directives.

These changes improve the method's functionality and align with updates in other parts of the codebase. Consider updating the method's documentation to reflect these changes, especially the new query parameter handling.

Add unit tests for the `Id` method

No specific unit tests were found for the `Id` method in `common/request.go`. Please add unit tests to cover different ID representations (string, float, int) and edge cases to ensure robust functionality.

🔗 Analysis chain
Line range hint 109-155: LGTM: Improved Id method with better type handling and thread safety

The `Id` method has been significantly improved:

- The return type change to `int64` aligns with updates in other structures.
- The use of atomic operations enhances thread safety.
- The updated logic for ID extraction is more robust, handling various ID representations (string, float, int).

To ensure the robustness of the new ID extraction logic, consider adding unit tests.

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Check for unit tests covering the Id method

# Test: Look for existing tests
rg -g '*_test.go' 'func Test.*Id\('

# If no specific tests found, suggest adding them
if [ $? -ne 0 ]; then
  echo "No specific tests found for the Id method. Consider adding unit tests to cover different ID representations (string, float, int) and edge cases."
fi
```

Length of output: 397
common/evm_block_ref.go (5)
106-156: Enhanced parameter handling for multiple methods.

This addition greatly improves the function's flexibility by handling both string and map types for the second parameter. It correctly differentiates between block hash and block number, with comprehensive error handling.

Consider extracting the common logic for handling string parameters into a separate function to reduce code duplication and improve readability. For example:

```go
func handleStringParam(param string) (string, int64, error) {
	if strings.HasPrefix(param, "0x") {
		if len(param) == 66 {
			return param, 0, nil
		}
		bni, err := HexToInt64(param)
		if err != nil {
			return param, 0, err
		}
		return strconv.FormatInt(bni, 10), bni, nil
	}
	return "", 0, nil
}
```

This function could then be called in both the string and map handling cases.

187-228: Improved parameter handling for "eth_getProof" and "eth_getStorageAt" methods.

This addition enhances the function's capability by handling both string and map types for the third parameter in these methods. It correctly differentiates between block hash and block number, with comprehensive error handling.

As suggested in the previous comment, consider extracting the common logic for handling string parameters into a separate function to reduce code duplication and improve readability. This suggestion applies here as well, as the logic is very similar to the previous case.
253-270: Improved response data extraction for transaction-related methods.

This modification enhances the extraction of block reference and block number from the response using the safer `PeekStringByPath` method. The logic for determining the block reference when it's empty but a valid block number exists is sound.

Consider adding error handling for the case where both `blockRef` and `blockNumber` are empty or invalid. This could help in identifying potential issues with the response data. For example:

```go
if blockRef == "" && blockNumber == 0 {
	return "", 0, errors.New("both blockRef and blockNumber are empty or invalid")
}
```

273-291: Enhanced response data extraction for "eth_getBlockByNumber" method.

This modification improves the extraction of block reference and block number from the response using the safer `PeekStringByPath` method. The logic for determining the block reference when it's empty but a valid block number exists is sound.

As suggested in the previous comment, consider adding error handling for the case where both `blockRef` and `blockNumber` are empty or invalid:

```go
if blockRef == "" && blockNumber == 0 {
	return "", 0, errors.New("both blockRef and blockNumber are empty or invalid")
}
```
Line range hint 1-297: Overall improvements to block reference extraction and parameter handling.

The changes in this file significantly enhance the robustness and flexibility of the `ExtractEvmBlockReferenceFromRequest` and `ExtractEvmBlockReferenceFromResponse` functions. The improvements include:

- Better handling of block hashes and block numbers.
- Enhanced flexibility in parameter handling for various Ethereum JSON-RPC methods.
- Improved extraction of block references and numbers from responses.
- Use of safer methods to access response data.

These changes will likely result in more reliable and efficient processing of Ethereum JSON-RPC requests and responses.

To further improve the maintainability of this code:

- Consider creating separate functions for handling different parameter types to reduce code duplication.
- Implement consistent error handling across all methods to ensure uniform behavior and easier debugging.
- Consider creating constants for magic numbers and frequently used strings (e.g., method names, JSON keys) to improve readability and reduce the risk of typos.

docs/pages/config/example.mdx (1)
77-77: Approve addition of `maxTimeout` with suggestion for documentation.

The addition of `maxTimeout: 30s` in the `server` section is a good improvement to the configuration options. This allows users to set a maximum duration for server requests, which can help prevent long-running requests from consuming too many resources.

Consider adding a brief explanation of the `maxTimeout` option in the comments or surrounding text to help users understand its purpose and impact. For example:

```yaml
server:
  # ... other options ...
  # maxTimeout: Sets the maximum duration for server requests to prevent long-running operations
  maxTimeout: 30s
```

upstream/http_json_json_client_test.go (2)
176-183: Improved test robustness and error handling.

The changes to the `SeparateBatchRequestsWithSameIDs` test function enhance its reliability:

- The endpoint URL update is consistent and likely reflects a change in the test environment.
- The new response handling with null checks and full JSON comparison improves the test's accuracy.

Consider wrapping the `panic` call in a `t.Fatal` or similar test failure method to provide more context in case of a test failure:

```diff
- panic(fmt.Sprintf("SeparateBatchRequestsWithSameIDs: resp1 is nil err1: %v", err1))
+ t.Fatalf("SeparateBatchRequestsWithSameIDs: resp1 is nil err1: %v", err1)
```

Also applies to: 200-221

609-610: Adjusted timeout assertion range for improved test stability.

The timeout assertion range in the `BatchRequestTimeout` test has been broadened from 745-755 milliseconds to 740-780 milliseconds. This change accommodates slight variations in timing, which can improve test stability across different environments.

Consider extracting these timing values into constants at the top of the test file. This would make it easier to adjust these values in the future if needed:

```go
const (
	minTimeoutDuration = 740 * time.Millisecond
	maxTimeoutDuration = 780 * time.Millisecond
)

// Then in the test:
assert.Greater(t, dur, minTimeoutDuration)
assert.Less(t, dur, maxTimeoutDuration)
```

upstream/http_json_rpc_client.go (5)
204-204: LGTM: Improved batch processing with enhanced logging and error handling

The changes in the `processBatch` method significantly improve the batch processing mechanism:

- Added logging for batch processing, enhancing observability.
- Enhanced error handling and response processing, providing more robust error management.
- Use of `common.SonicCfg.Marshal` for request body serialization, suggesting a standardized approach to JSON serialization.

These improvements will lead to better debugging capabilities and more reliable batch processing.

Consider adding a check for a `nil` error before accessing it in the error handling section:

```go
if err != nil {
	if errors.Is(err, context.DeadlineExceeded) {
		// ... (existing code)
	} else {
		// ... (existing code)
	}
	return
}
```

This will prevent potential panic if `err` is unexpectedly `nil`.

Also applies to: 224-224, 236-251, 262-274, 278-296
301-406: LGTM: Significantly improved response processing

The changes in the `processBatchResponse` method represent a major improvement in response handling:

- Use of `ast.NewSearcher` for JSON parsing, allowing for more efficient and flexible JSON handling.
- Enhanced error handling covering various response scenarios, improving robustness.
- Introduction of the `getJsonRpcResponseFromNode` function for parsing JSON-RPC responses from AST nodes.

These improvements will lead to more reliable and efficient batch response processing.

Consider adding more detailed logging for unexpected response types:

```go
if rootNode.TypeSafe() != ast.V_ARRAY && rootNode.TypeSafe() != ast.V_OBJECT {
	c.logger.Error().
		Str("responseType", rootNode.TypeSafe().String()).
		Str("response", util.Mem2Str(bodyBytes)).
		Msg("Unexpected response type (not array nor object)")
	// ... (existing error handling code)
}
```

This will provide more context for debugging unexpected response types.
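For readers without the sonic AST in front of them, the array-vs-object branching this method performs can be illustrated with a stdlib-only sketch; `responseKind` below is a simplified stand-in for the real decision, not the project's code:

```go
package main

import (
	"bytes"
	"fmt"
)

// responseKind mimics the first branching step: a JSON-RPC batch response
// arrives as an array, a single (possibly error) response as an object, and
// anything else (HTML error pages, plain-text rate-limit notices) is
// unexpected and worth logging verbatim.
func responseKind(body []byte) string {
	trimmed := bytes.TrimLeft(body, " \t\r\n")
	if len(trimmed) == 0 {
		return "unexpected"
	}
	switch trimmed[0] {
	case '[':
		return "array"
	case '{':
		return "object"
	default:
		return "unexpected"
	}
}

func main() {
	fmt.Println(responseKind([]byte(`[{"jsonrpc":"2.0","id":1,"result":"0x1"}]`))) // array
	fmt.Println(responseKind([]byte(`{"jsonrpc":"2.0","error":{"code":-32600}}`))) // object
	fmt.Println(responseKind([]byte(`rate limited`)))                              // unexpected
}
```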
408-443: LGTM: New function for flexible JSON-RPC response parsing

The new `getJsonRpcResponseFromNode` function is a valuable addition:

- Enhances flexibility in parsing JSON-RPC responses from AST nodes.
- Handles both result and error scenarios effectively.
- Creates custom errors for parsing exceptions, improving error reporting.

This function will contribute to more robust and maintainable JSON-RPC response handling.

Consider adding a check for `nil` nodes before accessing them:

```go
if idNode == nil || resultNode == nil || errorNode == nil {
	return nil, fmt.Errorf("invalid JSON-RPC response structure")
}
```

This will prevent potential panics if the JSON structure is unexpected.
461-468: LGTM: Improved single request handling and error management

The changes in the `sendSingleRequest` method enhance the request sending process:

- Use of `common.SonicCfg.Marshal` for request serialization, aligning with the standardized JSON serialization approach.
- Enhanced error handling with `common.NewErrEndpointTransportFailure`, providing more specific error types.
- Addition of `ErpcVersion` and `ErpcCommitSha` in the User-Agent header, improving request tracing capabilities.

These improvements contribute to more robust and traceable request handling.

Consider adding a timeout to the context for the HTTP request:

```go
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()

httpReq, errReq := http.NewRequestWithContext(ctx, "POST", c.Url.String(), bytes.NewBuffer(requestBody))
```

This ensures that the request doesn't hang indefinitely if the server doesn't respond.

Also applies to: 496-496, 499-502
554-554: LGTM: Significantly enhanced error handling and type inference

The changes in the `extractJsonRpcError` function greatly improve the error handling capabilities:

- Enhanced error type inference based on status codes and error messages, allowing for more specific error reporting.
- Addition of new error types like `ErrEndpointRequestTooLarge` and `ErrEndpointBillingIssue`, providing more granular error categorization.
- Improved handling of EVM-specific errors, enhancing the client's ability to handle blockchain-specific issues.

These improvements will lead to more accurate error reporting and easier debugging for upstream systems.

Consider refactoring the long series of `if` statements into a separate function or using a map of error patterns to error types. This could improve readability and maintainability:

```go
var errorPatterns = map[string]func(code int, message string, details map[string]interface{}) error{
	"requests limited to": func(code int, message string, details map[string]interface{}) error {
		return common.NewErrEndpointCapacityExceeded( /* ... */ )
	},
	// ... other patterns
}

for pattern, errorFunc := range errorPatterns {
	if strings.Contains(err.Message, pattern) {
		return errorFunc(int(code), err.Message, details)
	}
}
```

This approach could make the error handling logic more modular and easier to extend.

Also applies to: 562-853
erpc/http_server_test.go (2)

Line range hint 845-938: Remove duplicate createServerTestFixtures function.

The createServerTestFixtures function appears to be duplicated in the file. This is likely an unintentional copy-paste error. Please remove this duplicate function to maintain code cleanliness and prevent potential confusion.

Line range hint 940-1048: Remove duplicate TestHttpServer_MultipleUpstreams function.

The TestHttpServer_MultipleUpstreams function appears to be duplicated in the file, likely from the same copy-paste error. Please remove this duplicate function to maintain code cleanliness and prevent potential confusion, and ensure that any unique test cases from the duplicate are merged into the original function if necessary.
test/samples/evm-json-rpc.json (1)

Line range hint 1-1451: Consider improving file organization for better readability

The file structure is consistent and the new `debug_traceTransaction` method integrates well with existing methods. However, to enhance readability and maintainability, consider the following suggestions:

- Group related methods together (e.g., all block-related methods, all transaction-related methods).
- Add comments or separators to clearly delineate different groups of methods.
- Consider splitting this large file into smaller, more focused files based on method categories.

Example structure:

```
// Block-related methods
[
  { "eth_chainId": ... },
  { "eth_blockNumber": ... },
  { "eth_getBlockByNumber": ... }
],

// Transaction-related methods
[
  { "eth_getTransactionReceipt": ... },
  { "debug_traceTransaction": ... }
]
```

This organization will make it easier to navigate and maintain the file as more methods are added in the future.
erpc/networks_test.go (7)
Line range hint 1-5451: Consider refactoring the TestNetwork_Forward function for improved readability and maintainability.

The TestNetwork_Forward function is comprehensive and covers many scenarios, which is excellent. However, its large size may impact readability and maintainability. Consider the following suggestions:

- Split the large function into smaller, more focused test functions for each scenario.
- Use a consistent naming convention for all subtests to improve clarity.
- Consider adding more edge cases, such as:
  - Testing with malformed JSON-RPC requests
  - Verifying behavior with extremely large response payloads
  - Simulating network partitions or intermittent connectivity issues

Here's an example of how you could refactor one of the subtests:

```go
func TestNetwork_Forward_RateLimitedOnNetworkLevel(t *testing.T) {
	// Setup code...

	t.Run("ExceedsRateLimit", func(t *testing.T) {
		// Test code for exceeding rate limit
	})

	t.Run("WithinRateLimit", func(t *testing.T) {
		// Test code for staying within rate limit
	})
}
```

This structure would make it easier to add new test cases and maintain existing ones.
Line range hint 5453-5770: Enhance TestNetwork_InFlightRequests with additional concurrency tests

The TestNetwork_InFlightRequests function provides good coverage for concurrent request handling. To further improve it, consider the following suggestions:

- Add a test case for handling a large number of concurrent requests (e.g., 1000+) to stress test the system.
- Include a test for race conditions by intentionally introducing delays in critical sections of the code.
- Test the behavior when rapidly adding and removing in-flight requests.
- Verify the system's performance under high concurrency with varying response times from the upstream.

Here's an example of how you could add a stress test:

```go
t.Run("StressTestWithHighConcurrency", func(t *testing.T) {
	const numRequests = 1000
	var wg sync.WaitGroup
	for i := 0; i < numRequests; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			req := common.NewNormalizedRequest(requestBytes)
			_, err := network.Forward(context.Background(), req)
			assert.NoError(t, err)
		}()
	}
	wg.Wait()
	// Assert that all requests were processed correctly
	// Check for any lingering in-flight requests
})
```

This will help ensure that the system can handle a high volume of concurrent requests without issues.
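The same stress-test shape can be reduced to a self-contained program, with a stub `forward` standing in for `network.Forward` (the stub is an assumption for illustration):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runStress fires n concurrent "requests" at a stub forwarder and returns how
// many completed; in the real test the stub would be network.Forward plus an
// assertion per call.
func runStress(n int) int64 {
	var processed atomic.Int64
	forward := func() { processed.Add(1) }

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			forward()
		}()
	}
	wg.Wait()
	return processed.Load()
}

func main() {
	fmt.Println(runStress(1000)) // 1000
}
```

Run with `go test -race` (or `go run -race`) to also catch data races introduced by the concurrent path.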
Line range hint 5828-5886: Enhance setupTestNetwork for improved flexibility and error handling

The setupTestNetwork function is a crucial helper for test setup. To make it more robust and flexible, consider the following improvements:

- Add more configurable parameters, allowing tests to customize the setup as needed.
- Implement better error handling and propagation.
- Consider using a builder pattern for more flexible configuration.

Here's an example of how you could refactor the function:

```go
func setupTestNetwork(t *testing.T, opts ...TestNetworkOption) *Network {
	t.Helper()

	config := defaultTestNetworkConfig()
	for _, opt := range opts {
		opt(config)
	}

	setupMocksForEvmStatePoller()

	rateLimitersRegistry, err := upstream.NewRateLimitersRegistry(config.RateLimiterConfig, &log.Logger)
	if err != nil {
		t.Fatalf("Failed to create rate limiters registry: %v", err)
	}

	// ... (rest of the setup code)

	return network
}

type TestNetworkConfig struct {
	RateLimiterConfig *common.RateLimiterConfig
	UpstreamConfig    *common.UpstreamConfig
	// ... other configurable fields
}

type TestNetworkOption func(*TestNetworkConfig)

func WithRateLimiterConfig(cfg *common.RateLimiterConfig) TestNetworkOption {
	return func(c *TestNetworkConfig) {
		c.RateLimiterConfig = cfg
	}
}

// ... other option functions
```

This approach allows for more flexible and maintainable test setups.
Line range hint 5888-5909: Improve setupMocksForEvmStatePoller for better flexibility and error handling

The setupMocksForEvmStatePoller function is crucial for mocking EVM state polling. To enhance its functionality and reliability, consider the following suggestions:

- Add parameters to allow customization of mock responses.
- Implement error handling for gock setup failures.
- Consider adding validation for the mock setup.

Here's an example of how you could refactor the function:

```go
func setupMocksForEvmStatePoller(opts ...MockOption) error {
	resetGock()

	config := defaultMockConfig()
	for _, opt := range opts {
		opt(config)
	}

	for _, endpoint := range config.Endpoints {
		mock := gock.New(endpoint).
			Post("").
			Persist().
			Filter(func(request *http.Request) bool {
				return strings.Contains(safeReadBody(request), "eth_getBlockByNumber")
			}).
			Reply(200).
			JSON([]byte(fmt.Sprintf(`{"result": {"number":"%s"}}`, config.BlockNumber)))
		if !mock.IsDone() {
			return fmt.Errorf("failed to set up mock for endpoint: %s", endpoint)
		}
	}
	return nil
}

type MockConfig struct {
	Endpoints   []string
	BlockNumber string
}

type MockOption func(*MockConfig)

func WithEndpoints(endpoints ...string) MockOption {
	return func(c *MockConfig) { c.Endpoints = endpoints }
}

func WithBlockNumber(blockNumber string) MockOption {
	return func(c *MockConfig) { c.BlockNumber = blockNumber }
}
```

This approach provides more flexibility in setting up mocks and includes basic error handling.
Line range hint 5911-5914: Enhance anyTestMocksLeft for better flexibility and informativeness

The anyTestMocksLeft function serves a simple but important purpose. To make it more useful and flexible, consider the following improvements:

- Allow customization of the number of expected persisted mocks.
- Return more detailed information about remaining mocks.
- Add logging for debugging purposes.

Here's an example of how you could refactor the function:

```go
func anyTestMocksLeft(expectedPersistedMocks int) (int, []string) {
	remaining := len(gock.Pending()) - expectedPersistedMocks
	var details []string
	if remaining > 0 {
		for _, mock := range gock.Pending() {
			details = append(details, fmt.Sprintf("Mock: %s %s", mock.Request().Method, mock.Request().URLStruct.String()))
		}
	}
	return remaining, details
}

// Usage
remainingMocks, details := anyTestMocksLeft(2)
if remainingMocks > 0 {
	t.Errorf("Expected all mocks to be consumed, got %d left. Details: %v", remainingMocks, details)
}
```

This version provides more information about remaining mocks and allows for customization of the expected number of persisted mocks.
Line range hint 5921-5929: Improve error handling in safeReadBody function

The safeReadBody function is well-implemented for safely reading an HTTP request body. However, it could benefit from improved error handling. Consider the following suggestions:

- Return the error along with the body string to allow the caller to handle potential read errors.
- Log the error (if logging is available in this context) before returning an empty string.

Here's an example of how you could refactor the function:

```go
func safeReadBody(request *http.Request) (string, error) {
	body, err := io.ReadAll(request.Body)
	if err != nil {
		// Log the error if a logger is available
		// log.Printf("Error reading request body: %v", err)
		return "", err
	}
	request.Body = io.NopCloser(bytes.NewBuffer(body))
	return string(body), nil
}

// Usage
body, err := safeReadBody(request)
if err != nil {
	// Handle the error appropriately
	return
}
// Use the body
```

This approach allows for more robust error handling and gives the caller more control over how to handle read errors.
Line range hint 1-5929: Overall assessment: Comprehensive test suite with room for improvements

The erpc/networks_test.go file provides a robust set of tests for the Network struct and its methods. The tests cover a wide range of scenarios, including error handling, concurrent requests, and various edge cases. The use of table-driven tests and mock HTTP responses is commendable.
Main strengths:
- Comprehensive coverage of different scenarios
- Good use of subtests for organizing different test cases
- Effective use of mocking for HTTP responses
Areas for improvement:
- Refactor large test functions (e.g., TestNetwork_Forward) into smaller, more focused tests for better readability and maintainability
- Enhance error handling in helper functions like setupTestNetwork and safeReadBody
- Increase flexibility of helper functions by allowing more customization through parameters
- Add more stress tests and race condition checks for concurrent operations
- Consider adding property-based testing for more thorough edge case coverage
By addressing these points, the test suite can become even more robust and maintainable, ensuring the reliability of the Network implementation as the project evolves.
util/reader.go (1)

9-9: Note on buffer pre-sizing

There is no `bytes.NewBufferSize` in the Go standard library, so a rewrite to that function would not compile. The existing `bytes.NewBuffer(make([]byte, 0, 16*1024))` is already a correct, idiomatic way to pre-size the buffer. If you prefer to avoid the explicit `make`, an equivalent alternative is to declare a `bytes.Buffer` and call `Grow`:

```diff
- buf := bytes.NewBuffer(make([]byte, 0, 16*1024)) // 16KB default
+ var buf bytes.Buffer
+ buf.Grow(16 * 1024) // 16KB default
```

Callers that expect a `*bytes.Buffer` would then pass `&buf`.

vendors/drpc.go (2)
Line range hint 57-64: Add explicit error code check alongside message content

Relying solely on the error message content ("token is invalid") without checking the error code may lead to incorrect error handling if the message changes or is localized. Including an explicit error code check (e.g., `code == 4`) enhances robustness.

Apply this diff to include the error code check:

```diff
 if code := err.Code; code != 0 {
 	msg := err.Message
 	if err.Data != "" {
 		details["data"] = err.Data
 	}
-	if strings.Contains(msg, "token is invalid") {
+	if code == 4 && strings.Contains(msg, "token is invalid") {
 		return common.NewErrEndpointUnauthorized(
 			common.NewErrJsonRpcExceptionInternal(
 				code,
 				common.JsonRpcErrorUnauthorized,
 				msg,
 				nil,
 				details,
 			),
 		)
```
67-76: Ensure that treating missing methods as client-side exceptions doesn't mask server-side issues

While classifying errors containing "does not exist/is not available" as client-side exceptions prevents triggering the circuit breaker, it may inadvertently hide genuine server-side problems. Consider implementing additional monitoring or logging to detect and address critical server-side issues appropriately.
common/evm_json_rpc.go (1)
Line range hint 8-91: Consider adding error logging for failed normalizations

Currently, when normalization fails (e.g., `NormalizeHex` returns an error), the errors are silently ignored. Logging these errors can aid in diagnosing issues and understanding why certain parameters are not being normalized as expected.

erpc/networks.go (2)
79-79: Verify the logging format for `req.Id()`.

Logging the request ID as an integer using `Int64("id", req.Id())` is acceptable if `req.Id()` returns an `int64`. Ensure that this format aligns with your logging standards and that downstream log processors handle the integer ID correctly.
366-369: Reconsider the nil check on the receiver `n`.

In the method `EvmIsBlockFinalized`, the `n == nil` check is likely redundant if no caller ever invokes the method through a nil `*Network`. Note, however, that Go does permit calling a pointer-receiver method on a nil receiver (it only panics when the receiver is dereferenced), so remove the check only if a nil receiver is truly impossible at every call site.

common/json_rpc.go (1)
601-608: Consider using the correct JSON-RPC error code for request parsing errors

In the `TranslateToJsonRpcException` function, errors with code `ErrCodeJsonRpcRequestUnmarshal` are translated to `JsonRpcErrorParseException` (-32700).

According to the JSON-RPC specification, `Parse error` (-32700) is reserved for invalid JSON. Errors parsing the request object (but valid JSON) should use `Invalid Request` (-32600). Consider changing the error code to `JsonRpcErrorInvalidRequest` to more accurately reflect the nature of the error.

erpc/http_server.go (2)
Line range hint 152-156: Avoid exposing stack traces in client-facing error messages

Including stack traces in error messages sent to clients can reveal sensitive information about the application's internals. This practice can aid attackers in identifying potential weaknesses.

Apply this diff to prevent the stack trace from being sent to the client:

```diff
- fastCtx.SetBodyString(fmt.Sprintf(`{"jsonrpc":"2.0","error":{"code":-32603,"message":"%s"}}`, msg))
+ fastCtx.SetBodyString(`{"jsonrpc":"2.0","error":{"code":-32603,"message":"Internal server error"}}`)
```

Ensure that detailed errors and stack traces are logged on the server side instead.
307-307: Review the use of `path.Clean` on request paths

Using `path.Clean` on the request path may modify the URL in unintended ways, such as removing duplicate slashes or resolving dot segments, which could affect routing logic or introduce security issues.

Ensure that using `path.Clean` is appropriate for your URL parsing needs. If the intention is to normalize the path, consider whether additional validation is necessary to prevent malicious path manipulation.

common/config.go (1)
14-15: Consider using constants for version information.

If `ErpcVersion` and `ErpcCommitSha` are never modified at runtime, defining them as constants instead of variables clarifies their intent and ensures they remain immutable:

```diff
 var (
-	ErpcVersion   = "dev"
-	ErpcCommitSha = "none"
 	TRUE  = true
 	FALSE = false
 )
+const (
+	ErpcVersion   = "dev"
+	ErpcCommitSha = "none"
+)
```

Caveat: the "dev"/"none" defaults suggest these values may be injected at build time via `go build -ldflags "-X ..."`, which only works on package-level variables; in that case they must remain `var`s. Note: this suggestion follows after removing the `TRUE` and `FALSE` variables as mentioned below.

upstream/upstream.go (3)
375-376: Address the TODO comment for per-network or per-method executor

At lines 375-376, there's a TODO comment about extending the executor to be per-network or per-method to handle upstream performance differences or user-specific policies. Would you like assistance in implementing this enhancement or opening a GitHub issue to track it?

439-439: Address the TODO comment regarding per-network method ignoring

At line 439, there's a TODO comment suggesting that method ignoring should be per-network rather than global, as some upstreams support multiple networks. Would you like assistance in implementing this change or creating a GitHub issue to track this improvement?
Line range hint 462-477: Handle the error from `GetBudget` in `initRateLimitAutoTuner`

In the `initRateLimitAutoTuner` method, if `GetBudget` returns an error, it is currently ignored. To aid in debugging, consider logging the error to notify when the budget retrieval fails.

Apply this diff to log the error:

```diff
 if err == nil {
 	dur, err := time.ParseDuration(cfg.AdjustmentPeriod)
 	if err != nil {
 		u.Logger.Error().Err(err).Msgf("failed to parse rate limit auto-tune adjustment period: %s", cfg.AdjustmentPeriod)
 		return
 	}
 	u.rateLimiterAutoTuner = NewRateLimitAutoTuner(
 		&u.Logger,
 		budget,
 		dur,
 		cfg.ErrorRateThreshold,
 		cfg.IncreaseFactor,
 		cfg.DecreaseFactor,
 		cfg.MinBudget,
 		cfg.MaxBudget,
 	)
+} else {
+	u.Logger.Error().Err(err).Msgf("failed to get budget for rate limit auto-tuning: %s", u.config.RateLimitBudget)
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- `go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (75)
- .github/workflows/prune.yml (1 hunks)
- .github/workflows/release.yml (5 hunks)
- .github/workflows/test.yml (1 hunks)
- .gitignore (1 hunks)
- .vscode/launch.json (1 hunks)
- Dockerfile (1 hunks)
- Makefile (1 hunks)
- README.md (1 hunks)
- auth/http.go (6 hunks)
- cmd/erpc/init_test.go (1 hunks)
- cmd/erpc/main.go (1 hunks)
- cmd/erpc/main_test.go (1 hunks)
- common/config.go (4 hunks)
- common/errors.go (14 hunks)
- common/evm.go (1 hunks)
- common/evm_block_ref.go (5 hunks)
- common/evm_block_ref_test.go (3 hunks)
- common/evm_json_rpc.go (2 hunks)
- common/init_test.go (1 hunks)
- common/json_rpc.go (4 hunks)
- common/request.go (11 hunks)
- common/response.go (6 hunks)
- common/sonic.go (1 hunks)
- common/utils.go (3 hunks)
- data/memory.go (0 hunks)
- data/mock_connector.go (2 hunks)
- data/value.go (0 hunks)
- docs/pages/config/database.mdx (2 hunks)
- docs/pages/config/example.mdx (1 hunks)
- docs/pages/config/rate-limiters.mdx (1 hunks)
- docs/pages/deployment/railway.mdx (1 hunks)
- docs/pages/operation/batch.mdx (1 hunks)
- docs/pages/operation/production.mdx (1 hunks)
- docs/pages/operation/url.mdx (1 hunks)
- erpc/erpc.go (1 hunks)
- erpc/erpc_test.go (3 hunks)
- erpc/evm_json_rpc_cache.go (7 hunks)
- erpc/evm_json_rpc_cache_test.go (13 hunks)
- erpc/healthcheck.go (1 hunks)
- erpc/http_server.go (15 hunks)
- erpc/http_server_test.go (7 hunks)
- erpc/init.go (1 hunks)
- erpc/init_test.go (1 hunks)
- erpc/multiplexer.go (2 hunks)
- erpc/networks.go (11 hunks)
- erpc/networks_registry.go (1 hunks)
- erpc/networks_test.go (80 hunks)
- erpc/projects.go (3 hunks)
- erpc/projects_registry.go (1 hunks)
- go.mod (2 hunks)
- test/cmd/main.go (1 hunks)
- test/fake_server.go (4 hunks)
- test/init_test.go (1 hunks)
- test/k6/run.js (3 hunks)
- test/samples/evm-json-rpc.json (1 hunks)
- upstream/alchemy_http_json_rpc_client.go (1 hunks)
- upstream/envio_http_json_rpc_client.go (1 hunks)
- upstream/evm_state_poller.go (3 hunks)
- upstream/failsafe.go (7 hunks)
- upstream/http_json_json_client_test.go (5 hunks)
- upstream/http_json_rpc_client.go (22 hunks)
- upstream/init_test.go (1 hunks)
- upstream/pimlico_http_json_rpc_client.go (1 hunks)
- upstream/registry.go (2 hunks)
- upstream/registry_test.go (0 hunks)
- upstream/request_test.go (1 hunks)
- upstream/thirdweb_http_json_rpc_client.go (1 hunks)
- upstream/upstream.go (4 hunks)
- util/fastmem.go (1 hunks)
- util/headers.go (2 hunks)
- util/json_rpc.go (1 hunks)
- util/reader.go (1 hunks)
- util/redact.go (1 hunks)
- vendors/drpc.go (2 hunks)
- vendors/quicknode.go (1 hunks)
💤 Files with no reviewable changes (3)
- data/memory.go
- data/value.go
- upstream/registry_test.go
🚧 Files skipped from review as they are similar to previous changes (38)
- .gitignore
- .vscode/launch.json
- Dockerfile
- Makefile
- README.md
- auth/http.go
- cmd/erpc/init_test.go
- cmd/erpc/main.go
- common/evm.go
- common/init_test.go
- common/sonic.go
- common/utils.go
- docs/pages/config/database.mdx
- docs/pages/operation/url.mdx
- erpc/erpc.go
- erpc/erpc_test.go
- erpc/evm_json_rpc_cache_test.go
- erpc/healthcheck.go
- erpc/init.go
- erpc/init_test.go
- erpc/multiplexer.go
- erpc/networks_registry.go
- erpc/projects.go
- erpc/projects_registry.go
- go.mod
- test/cmd/main.go
- test/init_test.go
- upstream/alchemy_http_json_rpc_client.go
- upstream/envio_http_json_rpc_client.go
- upstream/evm_state_poller.go
- upstream/failsafe.go
- upstream/init_test.go
- upstream/pimlico_http_json_rpc_client.go
- upstream/registry.go
- upstream/request_test.go
- upstream/thirdweb_http_json_rpc_client.go
- util/fastmem.go
- util/json_rpc.go
🧰 Additional context used
🪛 actionlint
.github/workflows/prune.yml
25-25: shellcheck reported issue in this script: SC2086:info:2:36: Double quote to prevent globbing and word splitting
(shellcheck)
.github/workflows/release.yml
101-101: shellcheck reported issue in this script: SC2086:info:1:51: Double quote to prevent globbing and word splitting
(shellcheck)
🪛 GitHub Check: build
common/json_rpc.go
[failure] 18-18:
log redeclared in this block
[failure] 9-9:
other declaration of log
[failure] 151-151:
undefined: log.Error
[failure] 477-477:
method JsonRpcResponse.MarshalZerologObject already declared at common/json_rpc.go:294:27
[failure] 482-482:
r.RLock undefined (type *JsonRpcResponse has no field or method RLock)
[failure] 483-483:
r.RUnlock undefined (type *JsonRpcResponse has no field or method RUnlock)
🪛 LanguageTool
docs/pages/config/rate-limiters.mdx
[grammar] ~71-~71: A noun may be missing here.
Context: ...e auto-tuner is enabled by default when an upstream has any rate limit budget defined. Here...(DT_JJ_NO_NOUN)
docs/pages/deployment/railway.mdx
[uncategorized] ~35-~35: A comma might be missing here.
Context: ...ge in your services To reduce cost and overhead use private network (`.railway.internal...(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~35-~35: You might be missing the article “a” here.
Context: ...rvices To reduce cost and overhead use private network (.railway.internal
) to connec...(AI_EN_LECTOR_MISSING_DETERMINER_A)
docs/pages/operation/production.mdx
[uncategorized] ~6-~6: A determiner appears to be missing. Consider inserting it.
Context: ...g eRPC in production. ## Memory usage Biggest memory usage contributor in eRPC is siz...(AI_EN_LECTOR_MISSING_DETERMINER)
[uncategorized] ~7-~7: You might be missing the article “the” here.
Context: ...est memory usage contributor in eRPC is size of responses of your requests. For exam...(AI_EN_LECTOR_MISSING_DETERMINER_THE)
[grammar] ~9-~9: An article may be missing.
Context: ...igh RPS of large request/responses. In majority of use-cases eRPC uses around 256MB of mem...(BUNCH_OF)
[uncategorized] ~27-~27: A determiner appears to be missing. Consider inserting it.
Context: ...m, network-level retry is still useful. Recommendation is to configuremaxCount
to be equal ...(AI_EN_LECTOR_MISSING_DETERMINER)
[uncategorized] ~30-~30: A comma might be missing here.
Context: ...ed response time for your use-case, for example when using "trace" methods on EVM chain...(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~30-~30: A comma may be missing after the conjunctive/linking adverb ‘Therefore’.
Context: ...might take up to 10 seconds to respond. Therefore a low timeout might ultimately always f...(SENT_START_CONJUNCTIVE_LINKING_ADVERB_COMMA)
[uncategorized] ~32-~32: The hyphen in highly-recommended is redundant.
Context: ...cy](/config/failsafe#hedge-policy) is highly-recommended if you prefer "fast response as soon ...(ADVERB_LY_HYPHEN_FIX)
[typographical] ~32-~32: After the expression ‘for example’ a comma is usually used.
Context: ...fast response as soon as possible". For example setting500ms
as "delay" will make su...(COMMA_FOR_EXAMPLE)
[uncategorized] ~36-~36: Possible missing comma found.
Context: ...equires high storage for read-heavy use-cases such as indexing 100m blocks on Arbitru...(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~36-~36: You might be missing the article “the” here.
Context: ...wards cache database issues, so even if database is completely down it will not impact t...(AI_EN_LECTOR_MISSING_DETERMINER_THE)
🔇 Additional comments (83)
util/headers.go (3)
6-6: LGTM: Import statement added correctly.

The addition of the `net/http` import is necessary and correctly implemented to support the updated function signature.
Line range hint 1-24: Overall, the changes to `ExtractUsefulHeaders` are well-implemented.

The function has been successfully updated to work with `*http.Response` instead of `http.Header`, improving its flexibility while maintaining its core functionality. The changes are consistent and follow good practices. Consider the suggested minor optimization for improved efficiency.
9-9: Function signature update looks good, but verify usage.

The change from `http.Header` to `*http.Response` is a good improvement, providing more flexibility and access to the full response. However, ensure that all calls to this function across the codebase have been updated accordingly.

Run the following script to verify the function usage:

✅ Verification successful

Function signature update verified successfully. All usages of `ExtractUsefulHeaders` match the updated signature, ensuring compatibility across the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Verify all function calls to `ExtractUsefulHeaders` match the new signature.
# Test: Search for the function usage. Expect: Only occurrences of the new signature.
rg -A 5 $'ExtractUsefulHeaders'
```

Length of output: 711

```shell
#!/bin/bash
# Description: Verify the type of 'r' where 'ExtractUsefulHeaders' is called in upstream/http_json_rpc_client.go.
# Use ast-grep to find the function where ExtractUsefulHeaders is called and extract the type of 'r'
ast-grep --pattern '
function $_($_) {
  $$$
  details["headers"] = util.ExtractUsefulHeaders(r)
  $$$
}
' upstream/http_json_rpc_client.go
```

Length of output: 141

```shell
#!/bin/bash
# Description: Find the function or method where ExtractUsefulHeaders is called and determine the type of 'r'.
# Use ast-grep to locate the call and extract the surrounding function/method definition
ast-grep --pattern '
function $_($_) {
  $$$
  details["headers"] = util.ExtractUsefulHeaders(r)
  $$$
}
' upstream/http_json_rpc_client.go

# If ast-grep fails, use ripgrep to extract surrounding lines for manual inspection
rg -C 5 'ExtractUsefulHeaders\(r\)' upstream/http_json_rpc_client.go
```

Length of output: 519
.github/workflows/test.yml (1)
Line range hint 22-25: Excellent addition of the Gosec Security Scanner!

The inclusion of the Gosec Security Scanner is a valuable improvement to the workflow. It enhances the project's security posture by identifying potential vulnerabilities early in the development process. The arguments to exclude the test directory and skip tests during the scan are appropriate, as test files often contain intentionally vulnerable code.
util/redact.go (1)
13-13: Verify the safety and performance impact of using `Str2Mem`.

The change from `[]byte(endpoint)` to `Str2Mem(endpoint)` likely aims to improve performance by avoiding a new byte slice allocation. However, this comes with potential safety concerns:

- The `Str2Mem` function, as described in the AI summary, uses unsafe pointers. This could lead to undefined behavior if not handled correctly.
- The performance gain might be negligible for short strings, and the added complexity might not be justified.

Please run the following checks:

Consider adding benchmark tests if they don't exist to quantify the performance improvement.
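For context, a zero-copy conversion in the spirit of `Str2Mem` can be written with Go 1.20's `unsafe.StringData`; the exact implementation in `util/fastmem.go` may differ, and the caveat is the same either way: the returned slice aliases the string's memory and must never be written to:

```go
package main

import (
	"fmt"
	"unsafe"
)

// str2Mem returns a []byte aliasing the string's backing array without
// copying (requires Go 1.20+). Because Go strings are immutable, the caller
// must treat the slice as read-only; writing through it is undefined behavior.
func str2Mem(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	endpoint := "https://user:[email protected]"
	b := str2Mem(endpoint)
	fmt.Println(len(b) == len(endpoint)) // true, and no allocation happened
}
```

This is exactly the kind of function whose safety contract belongs in a doc comment, since the compiler cannot enforce the read-only requirement.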
.github/workflows/prune.yml (4)
1-9: LGTM: Well-structured workflow triggers

The workflow name is concise and descriptive. The combination of manual, scheduled, and push-based triggers provides excellent flexibility and automation for the pruning process.

12-14: LGTM: Appropriate job configuration

The job name "clean" clearly reflects its purpose, and using the latest Ubuntu ensures access to up-to-date dependencies.

16-21: LGTM: Secure and well-configured login step

The login step is well-configured:

- It uses a pinned version of the action for reproducibility.
- Authentication is handled securely using repository secrets.
- The step name clearly describes its purpose.

29-38: LGTM: Well-configured pruning step with dry-run mode

The pruning step is well-configured with specific parameters for the retention policy. It's good to see that dry-run mode is enabled (line 38), which allows for testing the workflow without making actual deletions.

Consider reviewing the results of the dry run and, when you're confident in the behavior, switching to actual pruning by setting `dry-run: false`.

To verify the impact of the pruning policy, you can run the following script after the workflow has been executed. This script will help you understand which images would be affected by the pruning policy.
docs/pages/deployment/railway.mdx (2)
55-55
: Improved example URL in curl command.

The change from a specific production URL to a placeholder (`https://erpc-xxxxxxx.up.railway.app`) is a good practice. This makes the example more generic and applicable to all users, preventing confusion that might arise from using a specific, potentially outdated URL.
Line range hint
1-85
: Overall documentation improvements enhance user experience.

The changes to this file significantly improve the deployment instructions for eRPC on Railway. The addition of the "Usage in your services" section with a practical code example, along with the generalization of the example URL in the curl command, makes the documentation more comprehensive and user-friendly.
These updates align well with the broader changes in the repository, such as enhanced error handling and logging. The improved documentation will help users better understand how to deploy and integrate eRPC services, potentially reducing implementation issues and support requests.
data/mock_connector.go (7)
5-8
: LGTM: New imports are appropriate for the added functionality.

The addition of the "time" and "github.com/erpc/erpc/common" imports aligns with the new features implemented in this file.
Line range hint
34-45
: LGTM: New MockConnector methods correctly implement TTL functionality.

The addition of `SetTTL` and `HasTTL` methods to the MockConnector struct properly extends its mocking capabilities to include TTL-related operations. The implementation correctly uses the `mock.Mock` embedded struct.
52-56
: LGTM: MockMemoryConnector struct is well designed.

The new MockMemoryConnector struct effectively extends MemoryConnector with a fake delay feature. This design allows for more realistic simulation of network latency in tests, which is valuable for thorough testing.
58-69
: LGTM: NewMockMemoryConnector constructor is well implemented.

The constructor properly initializes the MockMemoryConnector, handling potential errors from the base NewMemoryConnector and setting up the fake delay. This implementation ensures that MockMemoryConnector is correctly initialized and ready for use in tests.
71-81
: LGTM: Set and Get method overrides correctly implement fake delay.

The overridden Set and Get methods effectively incorporate the fake delay using `time.Sleep` before calling the base methods. This implementation allows for accurate simulation of network latency in tests, which is crucial for testing timing-sensitive operations.
83-86
: LGTM: SetFakeDelay method enhances testing flexibility.

The addition of the SetFakeDelay method allows for dynamic adjustment of the simulated network latency. This feature significantly enhances the flexibility of the MockMemoryConnector in various testing scenarios, enabling more comprehensive and realistic tests.
Line range hint
1-86
: Overall assessment: Excellent enhancements to mocking capabilities.

The changes to this file significantly improve the mocking capabilities of the connector implementations. The introduction of TTL-related methods in MockConnector and the new MockMemoryConnector with delay simulation provide valuable tools for more comprehensive and realistic testing scenarios. These additions will likely contribute to more robust and reliable tests across the project.
docs/pages/operation/production.mdx (1)
Line range hint
1-41
: Overall positive changes with room for minor improvements

The updates to this documentation file are valuable, particularly the new section on memory usage. These changes provide important guidance for running eRPC in production environments. The minor revisions to existing sections also contribute to improved clarity.
To further enhance the quality of this documentation:
- Implement the suggested grammatical and style improvements.
- Consider adding more specific examples or benchmarks in the memory usage section to help users better understand the impact of different request types on memory consumption.
- Review the entire document for consistency in formatting and language use.
These improvements will ensure that the production guidelines are as clear, accurate, and helpful as possible for users deploying eRPC in production environments.
.github/workflows/release.yml (1)
13-15
: LGTM: Added automatic trigger for main branch pushes

The addition of a push trigger for the main branch is a good practice for continuous deployment. This ensures that the release workflow runs automatically when changes are merged into the main branch, maintaining an up-to-date release pipeline.
docs/pages/config/rate-limiters.mdx (3)
64-70
: LGTM: Clear and informative introduction to the auto-tuner feature.

The new section effectively introduces the auto-tuner feature, explaining its purpose and providing relevant use cases. This addition enhances the documentation by clearly communicating when and why users might want to utilize this feature.
71-89
: LGTM: Comprehensive configuration example with clear explanations.

The provided YAML configuration example is well structured and includes helpful comments for each parameter. The additional explanation for `minBudget` is particularly valuable, as it guides users towards proper configuration to ensure the auto-tuner functions effectively.
🪛 LanguageTool
[grammar] ~71-~71: A noun may be missing here.
Context: ...e auto-tuner is enabled by default when an upstream has any rate limit budget defined. Here...(DT_JJ_NO_NOUN)
64-106
: Excellent addition to the rate limiters documentation.

The new auto-tuner section is a valuable addition to the documentation. It provides clear explanations, practical examples, and important configuration details. These changes will greatly assist users in understanding and implementing the auto-tuner feature effectively.
common/response.go (4)
14-26
: Excellent improvements to thread safety and memory efficiency!

The changes to the `NormalizedResponse` struct fields are well thought out:

- Changing `body` to `io.ReadCloser` allows for efficient streaming of large responses.
- Adding `expectedSize` helps with memory allocation and progress tracking.
- Using atomic types for `jsonRpcResponse` and `evmBlockNumber` ensures thread-safe access in concurrent scenarios.

These modifications will significantly improve the robustness and performance of the code.
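The atomic-field pattern described above can be sketched like this; the type and field names are illustrative stand-ins, not the actual `NormalizedResponse` definition:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// jsonRpcResult stands in for a lazily parsed JSON-RPC payload.
type jsonRpcResult struct{ Value string }

// response sketches the pattern: a lazily populated field guarded by
// atomic.Pointer so concurrent readers never race on it.
type response struct {
	parsed atomic.Pointer[jsonRpcResult]
}

// result returns the cached parse if present, otherwise parses once.
func (r *response) result() *jsonRpcResult {
	if p := r.parsed.Load(); p != nil {
		return p
	}
	p := &jsonRpcResult{Value: "parsed"}
	// CompareAndSwap ensures only one parse "wins" under contention;
	// losers simply reload the winner's value.
	r.parsed.CompareAndSwap(nil, p)
	return r.parsed.Load()
}

func main() {
	var r response
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); _ = r.result() }()
	}
	wg.Wait()
	fmt.Println(r.result().Value)
}
```

Compared with a mutex, `atomic.Pointer` keeps the read path lock-free, which matters for hot fields like a cached JSON-RPC response.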
130-138
: Appropriate updates to setter methods

The changes to `WithBody` and the addition of `WithExpectedSize` are consistent with the struct field modifications:

- `WithBody` now correctly accepts an `io.ReadCloser`, aligning with the new `body` field type.
- `WithExpectedSize` allows setting the new `expectedSize` field.

Both methods follow the builder pattern, maintaining consistency with the existing codebase.
Line range hint
285-312
: Excellent improvements to thread safety and request handling

The changes to the `CopyResponseForRequest` function are well implemented:

- Use of atomic operations prevents race conditions when accessing `jsonRpcResponse`.
- Cloning the JSON-RPC response ensures that the copy is independent of the original, preventing unintended side effects.
- Setting the ID from the request correctly handles multiplexed requests where only one actual request was sent to the upstream.
These modifications significantly improve the robustness of the function, especially in concurrent scenarios and when dealing with multiplexed requests.
Line range hint
1-312
: Overall excellent improvements with minor suggestions

The changes to `common/response.go` represent a significant enhancement to the codebase:

- Improved thread safety through the use of atomic operations and proper locking.
- Enhanced memory efficiency, particularly in handling response bodies.
- Better resource management with the addition of the `Release` method.

These modifications greatly improve the robustness and performance of the code, especially in concurrent scenarios.
A few minor issues were identified:
- An orphaned `RUnlock()` in the `EvmBlockNumber` method.
- Potential for lock optimization in the `IsObjectNull` method.
- Minor improvements possible in error handling and logging.
Addressing these minor points will further refine the already substantial improvements made to this file.
test/fake_server.go (4)
136-156
: LGTM: Well-implemented batch request handling.

The `handleBatchRequest` method is a solid implementation of batch JSON-RPC request processing. It correctly handles empty batch requests and processes each request individually, aggregating the responses as per the JSON-RPC 2.0 specification.
238-250
: LGTM: Centralized error response handling.

The `sendErrorResponse` method is a good addition that centralizes error response logic. It promotes code reuse and ensures consistency in error handling across the server. The thread-safe increment of the `requestsFailed` counter is also correctly implemented.
Line range hint
300-304
: LGTM: Added method for tracking successful requests.

The addition of the `RequestsSuccess` method provides a valuable way to track successful requests. This is useful for testing and monitoring purposes. The method correctly uses a mutex for thread-safe access to the counter.
Line range hint
1-304
: Overall: Significant improvements to FakeServer functionality and robustness.

The changes made to `test/fake_server.go` greatly enhance the FakeServer's capabilities:
- Improved handling of both single and batch JSON-RPC requests.
- Added rate limiting simulation.
- Enhanced error handling and response generation.
- Support for generating large fake responses for performance testing.
- Better tracking of request statistics.
These improvements make the FakeServer a more versatile and powerful tool for testing various scenarios, including edge cases and performance under load.
common/request.go (5)
15-15
: LGTM: Addition of RequestContextKey constant

The introduction of `RequestContextKey` enhances consistency in context handling across the application. This aligns well with the `ContextKey` type mentioned in other file changes.
Line range hint
39-42
: LGTM: Addition of UseUpstream field to RequestDirectives

The new `UseUpstream` field enhances the flexibility of request routing by allowing specific upstream targeting, including wildcard support. This change aligns well with the new upstream handling configurations mentioned in other file changes.
266-266
: LGTM: Improved error handling in Method method

The use of `NewErrJsonRpcRequestUnmarshal` for error handling in the JSON parsing section enhances error reporting and aligns with similar improvements in other parts of the codebase.
307-308
: LGTM: Improved thread safety in EvmBlockNumber method

The use of atomic loading for `lastValidResponse` enhances thread safety and aligns with the overall shift towards using atomic operations in the struct. This change contributes to the improved concurrency handling in the codebase.

Also applies to: 312-312
Line range hint
1-355
: Overall improvements to NormalizedRequest and associated methods

The changes to this file significantly enhance the `NormalizedRequest` struct and its methods:

- Improved concurrency handling with atomic operations
- Enhanced error handling and JSON processing
- Added new fields and methods for increased functionality
- Consistent updates aligning with changes in other parts of the codebase

These modifications collectively improve the robustness and performance of the request handling system.
To ensure the reliability of these extensive changes, consider implementing comprehensive integration tests:
common/evm_block_ref.go (1)
50-52
: Improved block hash handling.

This addition enhances the function by directly returning the block hash when it's exactly 66 characters long (including the "0x" prefix). This is a valid optimization, as it avoids unnecessary conversion attempts for known block hashes.
common/evm_block_ref_test.go (6)
182-200
: LGTM: New test case for `eth_call` with blockHash object.

This test case is a valuable addition, as it covers the scenario where a blockHash is provided instead of a block number for the `eth_call` method. The expected values are correctly set, with the reference matching the provided blockHash and the block number set to 0.
201-219
: LGTM: New test case for `eth_call` with blockNumber object.

This test case is a good addition, as it covers the scenario where a blockNumber is provided as an object for the `eth_call` method. The expected values are correctly set, with both the reference and block number matching the provided hexadecimal value "0x1b4".
294-309
: LGTM: New test case for `eth_getProof` with blockHash object.

This test case is a valuable addition, as it covers the scenario where a blockHash is provided instead of a block number for the `eth_getProof` method. The expected values are correctly set, with the reference matching the provided blockHash and the block number set to 0.
310-325
: LGTM: New test case for `eth_getProof` with blockNumber object.

This test case is a good addition, as it covers the scenario where a blockNumber is provided as an object for the `eth_getProof` method. The expected values are correctly set, with both the reference and block number matching the provided hexadecimal value "0x1b4".
346-346
: LGTM: Minor type adjustment in `eth_getTransactionReceipt` test case.

The change from `json.RawMessage` to `[]byte` for the `Result` field is a minor type adjustment that doesn't affect the functionality of the test case. This change likely improves consistency with other parts of the codebase or enhances JSON handling.
Line range hint
182-346
: Overall assessment: Improved test coverage and minor refinements.

The changes in this file significantly enhance the test suite for the `ExtractBlockReference` function. The new test cases for the `eth_call` and `eth_getProof` methods cover important scenarios where block references are provided as objects (both blockHash and blockNumber). These additions ensure more robust testing of the block reference extraction logic.

The minor type adjustment in the `eth_getTransactionReceipt` test case maintains consistency without affecting functionality.

These changes collectively contribute to a more comprehensive and reliable test suite, which is crucial for maintaining the integrity of the block reference extraction functionality.
upstream/http_json_json_client_test.go (1)
363-363
: Improved error assertion specificity.

The error assertion in the `HTMLResponseForBatchRequest` test has been updated to check for the specific "503 Service Unavailable" message. This change enhances the test's precision and better reflects the expected error in this scenario.

upstream/http_json_rpc_client.go (4)
14-14
: LGTM: Improved JSON parsing with sonic/ast

The addition of the `sonic/ast` package is a good choice. It suggests a move towards AST-based JSON parsing, which can potentially improve performance and provide more fine-grained control over JSON handling.
172-182
: LGTM: Enhanced logging and error handling

The changes in the `SendRequest` method significantly improve the robustness and observability of the request handling process:

- Enhanced logging provides more detailed information about requests, including batch sizes and individual request details at the trace level.
- Improved error handling for batch requests, including better management of the batch timer.
- Use of `common.NewErrEndpointTransportFailure` for transport errors provides more specific error types.

These improvements will aid in debugging and provide better error context for different failure scenarios.
Also applies to: 185-196
839-853
: LGTM: Improved handling of reverted transactions

The addition of specific handling for the "Error(string)" prefix in transaction revert data is a valuable improvement:

- It provides a more accurate error type (`JsonRpcErrorEvmReverted`) for reverted transactions.
- This change enhances the client's ability to distinguish between different types of transaction failures, which is crucial for blockchain interactions.
This improvement will lead to more precise error reporting for smart contract interactions.
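For context, an `Error(string)` revert payload is ABI-encoded as the 4-byte selector `0x08c379a0`, a 32-byte offset, a 32-byte length, and the padded reason bytes. The hand-rolled decoder below illustrates that layout; it is not the repo's implementation:

```go
package main

import (
	"encoding/binary"
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeRevertReason unpacks an Error(string) revert payload:
// 4-byte selector 0x08c379a0, 32-byte offset, 32-byte length,
// then the UTF-8 reason bytes padded to a 32-byte boundary.
func decodeRevertReason(dataHex string) (string, bool) {
	raw, err := hex.DecodeString(strings.TrimPrefix(dataHex, "0x"))
	if err != nil || len(raw) < 4+32+32 {
		return "", false
	}
	if hex.EncodeToString(raw[:4]) != "08c379a0" {
		return "", false // not an Error(string) revert
	}
	// Length lives in the low 8 bytes of the second 32-byte word.
	strLen := int(binary.BigEndian.Uint64(raw[60:68]))
	start := 68
	if len(raw) < start+strLen {
		return "", false
	}
	return string(raw[start : start+strLen]), true
}

func main() {
	// "hi" encoded as Error(string)
	payload := "0x08c379a0" +
		"0000000000000000000000000000000000000000000000000000000000000020" + // offset
		"0000000000000000000000000000000000000000000000000000000000000002" + // length
		"6869000000000000000000000000000000000000000000000000000000000000" // "hi" padded
	reason, ok := decodeRevertReason(payload)
	fmt.Println(reason, ok)
}
```

Detecting this selector is what lets a client surface the human-readable revert reason instead of an opaque execution failure.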
Line range hint
1-878
: Overall: Significant improvements to the JSON-RPC client

The changes made to this file represent a substantial enhancement to the JSON-RPC client:

- Introduction of AST-based JSON parsing with `sonic/ast`, improving performance and flexibility in JSON handling.
- Significantly improved error handling and type inference, allowing for more precise error reporting and easier debugging.
- Enhanced logging throughout the request and response cycle, improving observability.
- Better handling of batch requests and responses, including improved error management.
- More robust handling of various error scenarios, including EVM-specific errors and reverted transactions.
These improvements collectively contribute to a more robust, efficient, and maintainable JSON-RPC client. The changes will likely result in better performance, easier debugging, and more reliable interaction with JSON-RPC services, particularly in the context of blockchain interactions.
Great work on these enhancements!
erpc/http_server_test.go (7)
38-38
: Verify the rationale for increasing the MaxTimeout.

The MaxTimeout has been increased from "200ms" to "500ms". While this change may improve test stability by reducing false positives due to tight timeouts, it's important to ensure that it doesn't mask any actual timeout issues in the code being tested.
Could you provide more context on why this change was necessary? Are there any other parts of the test suite that might be affected by this increased timeout?
145-145
: Approve case-sensitive error message updates.

The test cases have been updated to expect lowercase "timeout" in error messages instead of "Timeout". This change improves consistency in error message formatting.
Please ensure that this change aligns with updates in the server's error handling logic. It would be beneficial to verify that all error messages consistently use lowercase formatting.
Also applies to: 164-164, 215-215
262-615
: Excellent addition of comprehensive test cases.

The new TestHttpServer_SingleUpstream test case greatly enhances the test coverage for the HTTP server functionality. It includes various configuration scenarios, tests for different upstream setups, caching behaviors, and error handling.
Consider adding the following improvements:
- Add comments explaining the purpose of each configuration case for better readability.
- Consider parameterizing the test to make it easier to add new configuration cases in the future.
- Ensure that all error scenarios are covered, including network failures and malformed responses.
617-725
: Approve addition of multiple upstream test cases.

The new TestHttpServer_MultipleUpstreams test case is a valuable addition that verifies the server's behavior with multiple configured upstreams. It correctly tests upstream selection via both headers and query parameters.
Consider the following enhancements:
- Add a test case for fallback behavior when the selected upstream fails.
- Include a scenario where an invalid upstream is specified to ensure proper error handling.
- Test the behavior when conflicting upstream selections are provided (e.g., different upstreams in header and query parameter).
727-775
: Approve addition of integration tests with suggestion.

The new TestHttpServer_IntegrationTests is a valuable addition for verifying the server's behavior with external services like DRPC. The test for unsupported methods is particularly useful.
To improve test isolation and reliability:
- Consider using a mock server instead of the actual DRPC endpoint to avoid dependencies on external services during test runs.
- Add more test cases to cover various integration scenarios, such as successful requests and different error conditions.
- Implement a mechanism to easily switch between mock and real endpoints for local testing vs. CI environments.
777-843
: Approve addition of createServerTestFixtures helper function.

The createServerTestFixtures function is a great addition that improves code reusability and simplifies test setup. It encapsulates the logic for creating a test server and provides a convenient way to send requests.
Consider the following enhancements:
- Add error handling for the case where the server fails to start.
- Implement a cleanup mechanism to ensure the server is properly shut down after tests complete.
- Consider parameterizing the function to allow for different server configurations in various test scenarios.
Line range hint
1-1048
: Overall improvement in test coverage with minor issues.

The changes to this file significantly enhance the test coverage for the HTTP server functionality. New test cases for multiple upstreams, integration tests, and a helpful test fixture function have been added. These additions will improve the robustness of the test suite.
However, there are a few issues to address:
- Duplicate functions (createServerTestFixtures and TestHttpServer_MultipleUpstreams) need to be removed.
- Some test cases could benefit from additional scenarios or parameterization.
- Consider improving test isolation, especially for integration tests.
Once these issues are addressed, the test file will be in excellent shape to ensure the reliability of the HTTP server implementation.
erpc/networks_test.go (1)
Line range hint
5916-5919
: LGTM: resetGock function is concise and effective

The resetGock function effectively resets the gock HTTP mocking by calling both `gock.Off()` and `gock.Clean()`. This ensures a clean slate for each test, preventing interference between tests due to lingering mocks.
common/evm_json_rpc.go (1)
9-10
:⚠️ Potential issue

Changing the lock from `nrq` to `jrq` may introduce concurrency issues.

By locking `jrq` instead of `nrq`, there is a potential risk of concurrent access issues with `nrq` elsewhere in the codebase. Ensure that all accesses to `nrq` are thread-safe, or consider maintaining the lock on `nrq` to prevent data races.

To verify potential data races on `nrq`, run the following script:

✅ Verification successful
Concurrency Safe: No Unprotected Accesses to `nrq` Found

After reviewing the codebase, no additional unprotected accesses to `nrq` were detected. Therefore, locking `jrq` should not introduce concurrency issues.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for concurrent accesses to 'nrq' without proper locking.
# Test: Find all instances where 'nrq' is accessed.
# Expect: All accesses to 'nrq' should be protected with locks.
rg --type go 'nrq\.\w+' --json | jq -r '.data.path.text + ":" + (.data.line_number|tostring) + ": " + .data.lines.text' | grep -v 'NormalizeEvmHttpJsonRpc'

Length of output: 163
erpc/evm_json_rpc_cache.go (9)
11-11
: Properly added util package import

The addition of `"github.com/erpc/erpc/util"` is necessary for the utilization of utility functions like `Str2Mem` and `Mem2Str` in the code.
65-67
: Ensured thread safety with read lock on rpcReq

The introduction of `rpcReq.RLock()` and `defer rpcReq.RUnlock()` ensures that concurrent read access to `rpcReq` is thread-safe. This is a good practice to prevent race conditions when `rpcReq` might be accessed by multiple goroutines.
104-108
: Verify type compatibility when assigning Result

Replacing `json.RawMessage(resultString)` with `util.Str2Mem(resultString)` alters how the result string is converted. Ensure that `util.Str2Mem` returns a type compatible with `JsonRpcResponse.Result` and that this change does not introduce any issues in JSON serialization or data handling.
130-130
: Improved function naming for clarity

Renaming `shouldCache` to `shouldCacheResponse` enhances readability by clearly indicating the function's purpose in determining whether a response should be cached.
151-162
: Optimized logging with conditional checks

Adding `if lg.GetLevel() <= zerolog.DebugLevel` before logging statements prevents unnecessary string interpolation when the log level is higher than debug. This optimization improves performance by avoiding the cost of preparing log messages that won't be emitted.
171-178
: [Duplicate comment] Optimized logging with conditional checks

Similar to the previous logging optimization, the conditional check before the debug log ensures efficient logging practices throughout the code.
183-183
: Ensure correct conversion of rpcResp.Result with util.Mem2Str

When setting the cache with `util.Mem2Str(rpcResp.Result)`, verify that the conversion from memory to string accurately represents the JSON-RPC response result without data loss or corruption.
Line range hint
186-223
: Refactored shouldCacheResponse function improves caching logic

The updated `shouldCacheResponse` function refines the decision-making process for caching responses, incorporating additional checks for errors and synchronization states. This enhances error handling and aligns with the overall improvements in the caching mechanism.
259-259
: Simplified shouldCacheForBlock function

Simplifying `shouldCacheForBlock` to a single line improves code readability and maintainability by directly utilizing `c.network.EvmIsBlockFinalized(blockNumber)` without additional wrapping logic.

erpc/networks.go (1)
440-447
: Ensure proper error handling in `normalizeResponse`.

While setting the ID in the JSON-RPC response, if `jrr.SetID(jrq.ID)` returns an error, it should be appropriately handled. Currently the error is returned, but ensure that this does not interrupt the flow unexpectedly and that all possible errors are considered.

Run the following script to check for unhandled errors in `normalizeResponse`:

✅ Verification successful
Proper error handling confirmed in `normalizeResponse`.

The error returned by `jrr.SetID(jrq.ID)` is appropriately handled by returning the error, ensuring that unexpected interruptions are managed consistently.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find occurrences of error returns that might not be properly handled.
# Search for instances where errors are not checked after function calls.
rg 'if\s+.*err\s*:=' -A 1 | rg -v 'if err == nil' | rg -v 'return err' -A 1

Length of output: 8118
common/json_rpc.go (1)
482-483
:⚠️ Potential issue

Fix 'undefined: r.RLock' and 'r.RUnlock' by correcting the receiver type.

At lines 482-483, the methods `r.RLock()` and `r.RUnlock()` are called on `JsonRpcResponse`, which does not embed `sync.RWMutex`. After updating the receiver to `JsonRpcRequest`, which embeds `sync.RWMutex`, these methods will be available.
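For illustration, the method promotion at the root of this build failure works like this (minimal stand-in types, not the repo's actual definitions):

```go
package main

import (
	"fmt"
	"sync"
)

// jsonRpcRequest embeds sync.RWMutex, so RLock/RUnlock are promoted
// methods — the fix described above for the "r.RLock undefined" error.
type jsonRpcRequest struct {
	sync.RWMutex
	ID int64
}

// jsonRpcResponse has no embedded mutex: calling r.RLock() on it is a
// compile-time error, which is exactly what the build check reported.
type jsonRpcResponse struct {
	ID int64
}

func readID(r *jsonRpcRequest) int64 {
	r.RLock() // promoted from the embedded RWMutex
	defer r.RUnlock()
	return r.ID
}

func main() {
	req := &jsonRpcRequest{ID: 42}
	fmt.Println(readID(req))
}
```

Embedding (rather than a named `mu` field) exposes the lock as part of the type's method set, which is why the receiver type determines whether `r.RLock()` compiles at all.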
🪛 GitHub Check: build
[failure] 482-482: r.RLock undefined (type *JsonRpcResponse has no field or method RLock)
[failure] 483-483: r.RUnlock undefined (type *JsonRpcResponse has no field or method RUnlock)

common/config.go (3)
380-382
: LGTM

The added checks ensure that `Architecture` and `Evm` are properly validated before use, enhancing the robustness of the `NetworkId` method.
448-452
: LGTM

The circuit breaker configuration values are set appropriately, aligning with the updated types and improving the resilience of the upstream configurations.
506-512
: LGTM

The added validation ensures that the required fields `Architecture` and `Evm` are present, improving configuration validation and preventing potential runtime errors.

upstream/upstream.go (3)
146-146
: Consistency in error messages: use 'json-rpc' instead of 'jsonrpc'

At line 146, the error message has been updated to "failed to unmarshal json-rpc request", ensuring consistent use of 'json-rpc' throughout the codebase.
151-151
: Verify error handling after `NormalizeEvmHttpJsonRpc`

At line 151, the error handling after the call to `common.NormalizeEvmHttpJsonRpc` has been removed. Please verify that this function handles errors internally or that any potential errors are adequately managed. If errors can occur, consider adding the necessary error handling to prevent unexpected issues.
395-400
: Improved chain ID parsing using `SonicCfg.Unmarshal`

At lines 395-400, the chain ID is now parsed using `common.SonicCfg.Unmarshal`, enhancing efficiency and reducing potential parsing errors.

common/errors.go (9)
159-160
: Review the `MarshalJSON` method for consistency and potential issues

The custom `MarshalJSON` method for `BaseError` handles different types of `Cause`. Ensure that:

- All possible types of `Cause` are correctly handled, especially when dealing with joined errors or standard errors.
- The `SonicCfg.Marshal` function properly serializes the anonymous structs without omitting any necessary fields.
- There is consistent handling of the `Cause` field across different conditional branches.

Consider adding unit tests for `MarshalJSON` to cover various scenarios of `Cause`.

Also applies to: 167-168, 175-180, 187-190
939-941
: `ErrorStatusCode` method added correctly for `ErrUpstreamMethodIgnored`

The `ErrorStatusCode` method for `ErrUpstreamMethodIgnored` correctly returns `http.StatusUnsupportedMediaType`, which aligns with the intended HTTP status code for this error.
1350-1363
: Addition of `ErrEndpointTransportFailure` enhances error specificity

The new error type `ErrEndpointTransportFailure` and its constructor `NewErrEndpointTransportFailure` improve the granularity of error handling for transport-level failures when communicating with remote endpoints.
1453-1475: `ErrEndpointRequestTooLarge` introduced for large request handling

The introduction of `ErrEndpointRequestTooLarge` and related types, such as `TooLargeComplaint`, provides a clear way to handle scenarios where remote endpoints reject requests due to size constraints (e.g., large block ranges or too many addresses). The `ErrorStatusCode` method correctly returns `http.StatusRequestEntityTooLarge`.
Line range hint 1495-1505: Updated JSON-RPC error codes improve standardization

The addition of standard and blockchain-specific JSON-RPC error codes enhances consistency and interoperability. Ensure that these error codes are correctly mapped and utilized throughout the application.
1572-1572: Changed `Data` field to `interface{}` in `ErrJsonRpcExceptionExternal`

Modifying the `Data` field type from `string` to `interface{}` allows for more flexible error data representation, accommodating the various data types that may be included in JSON-RPC error responses.
1689-1691: `IsClientError` function correctly identifies client-side errors

The new `IsClientError` function checks whether an error is a client-side error based on specific error codes. This utility can be useful when implementing conditional logic based on error types.
977-985: Verify all usages of `NewErrUpstreamHedgeCancelled` are updated with the new `cause` parameter

The `NewErrUpstreamHedgeCancelled` function now includes a `cause` parameter. Ensure that all calls to this constructor across the codebase are updated to pass the appropriate `cause` argument to avoid compilation errors or unintended behavior.

Run the following script to find and review all invocations:

✅ Verification successful

All usages of `NewErrUpstreamHedgeCancelled` have been updated with the new `cause` parameter.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Find all usages of NewErrUpstreamHedgeCancelled in the codebase.
# Expected: All usages should include the 'cause' parameter.
rg 'NewErrUpstreamHedgeCancelled\('

Length of output: 153
Line range hint 98-104: Ensure proper handling of marshalling errors with `SonicCfg.Marshal`

In the `Error` method of `BaseError`, you are using `SonicCfg.Marshal` to serialize the `Details` map. Make sure that `SonicCfg` is properly initialized and that `Marshal` handles all possible types within `Details`. Also, consider handling any potential marshalling error (`er`) to avoid silent failures.

Run the following script to check the initialization and usage of `SonicCfg`:
args: -exclude-dir=test -tests=false ./...
Remove duplicated Gosec arguments
This line appears to be an unintentional duplication of the Gosec Security Scanner arguments. It's redundant and may cause confusion.
Please remove this line:
- args: -exclude-dir=test -tests=false ./...
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
args: -exclude-dir=test -tests=false ./...
// const samplePayload = JSON.stringify({
//   "jsonrpc": "2.0",
//   "method": "debug_traceTransaction",
//   "params": [
//     "0xe6c2decd68012e0245599ddf93c232bf92884758393a502852cbf2f393e3d99c"
//   ]
// });
🛠️ Refactor suggestion
Consider a more maintainable approach for managing alternative payloads.
While having an alternative payload readily available can be useful for testing different scenarios, keeping commented-out code in the main file may lead to maintenance issues over time.
Consider moving alternative payloads to a separate configuration file or implementing a mechanism to switch between payloads using environment variables or command-line arguments. This approach would improve maintainability and make it easier to manage multiple test scenarios.
- name: Generate short SHA
  id: short_sha
  run: echo "SHORT_SHA=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
Fix: Use double quotes in short SHA generation
The short SHA generation step is a good addition, but there's a potential issue flagged by the static analysis tool.
To fix the shellcheck warning and prevent potential word splitting issues, use double quotes around the command substitution:
- name: Generate short SHA
id: short_sha
run: echo "SHORT_SHA=$(git rev-parse --short HEAD)" >> "$GITHUB_OUTPUT"
🧰 Tools
🪛 actionlint
101-101: shellcheck reported issue in this script: SC2086:info:1:51: Double quote to prevent globbing and word splitting
(shellcheck)
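The quoting rule behind SC2086 can be demonstrated in isolation. A hypothetical sketch (the path below is made up) showing why the fix matters: an unquoted expansion undergoes word splitting, while a quoted one stays a single word.

```shell
# Hypothetical demonstration of SC2086: a value containing spaces.
out_file="/tmp/dir with spaces/output.txt"

set -- $out_file            # unquoted: split on whitespace into 3 words
unquoted_count=$#

set -- "$out_file"          # quoted: preserved as a single word
quoted_count=$#

echo "unquoted=$unquoted_count quoted=$quoted_count"
# prints: unquoted=3 quoted=1
```

The same splitting would corrupt `>> $GITHUB_OUTPUT` if the runner's output path ever contained whitespace, which is why the quoted form is safer.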
@@ -249,7 +236,7 @@
 }

 rpcReq := new(JsonRpcRequest)
-if err := sonic.Unmarshal(r.body, rpcReq); err != nil {
+if err := SonicCfg.Unmarshal(r.body, rpcReq); err != nil {
💡 Codebase verification
Action Required: Add Benchmarks for JSON Unmarshalling
No existing benchmarks found for JSON unmarshalling. Please add benchmarks to compare the performance of `SonicCfg.Unmarshal` with `sonic.Unmarshal`.
🔗 Analysis chain
LGTM: Updated JSON unmarshalling in JsonRpcRequest method
The switch to SonicCfg.Unmarshal
aligns with similar changes in other files and potentially provides more flexibility or performance benefits.
Consider verifying the performance impact of this change:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for benchmarks related to JSON unmarshalling
# Test: Look for existing benchmarks
rg -g '*.go' 'func Benchmark.*Unmarshal'
# If no benchmarks found, suggest adding them
if [ $? -ne 0 ]; then
echo "No existing benchmarks found for JSON unmarshalling. Consider adding benchmarks to compare the performance of SonicCfg.Unmarshal with the previous sonic.Unmarshal."
fi
Length of output: 397
@@ -304,7 +282,7 @@
 if r.jsonRpcRequest != nil {
 	e.Object("jsonRpc", r.jsonRpcRequest)
 } else if r.body != nil {
-	e.Str("body", string(r.body))
+	e.Str("body", util.Mem2Str(r.body))
💡 Codebase verification
Action Required: Add Benchmarks for util.Mem2Str
No existing benchmarks found for `util.Mem2Str`. Please add benchmarks to compare the performance of `util.Mem2Str` with the previous string conversion method.
🔗 Analysis chain
LGTM: Updated string conversion in MarshalZerologObject
The use of util.Mem2Str
for converting the body to a string aligns with the new memory manipulation functions introduced in the codebase. This change potentially improves performance and memory handling.
Consider verifying the performance impact of this change:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for benchmarks related to string conversion
# Test: Look for existing benchmarks
rg -g '*.go' 'func Benchmark.*Mem2Str'
# If no benchmarks found, suggest adding them
if [ $? -ne 0 ]; then
echo "No existing benchmarks found for string conversion. Consider adding benchmarks to compare the performance of util.Mem2Str with the previous direct conversion method."
fi
Length of output: 397
TRUE = true
FALSE = false
Avoid redefining built-in boolean constants.
Defining `TRUE` and `FALSE` variables is unnecessary since Go already provides the predeclared boolean constants `true` and `false`. Redefining them can cause confusion and is not idiomatic Go.
Apply this diff to remove the unnecessary variables:
var (
ErpcVersion = "dev"
ErpcCommitSha = "none"
- TRUE = true
- FALSE = false
)
📝 Committable suggestion
TRUE = true
FALSE = false
var (
	ErpcVersion = "dev"
	ErpcCommitSha = "none"
)
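If the `TRUE`/`FALSE` variables exist only so their addresses can be assigned to optional `*bool` configuration fields (an assumption, not confirmed by this diff), a small helper is the idiomatic replacement:

```go
package main

import "fmt"

// boolPtr returns a pointer to b. This is the common idiom for
// populating optional *bool config fields without package-level
// TRUE/FALSE variables.
func boolPtr(b bool) *bool { return &b }

// FailsafeConfig is a hypothetical config type for illustration only:
// a nil Enabled means "unset", which is distinct from false.
type FailsafeConfig struct {
	Enabled *bool
}

func main() {
	cfg := FailsafeConfig{Enabled: boolPtr(true)}
	fmt.Println(*cfg.Enabled) // prints: true
}
```

Each call to `boolPtr` allocates a fresh bool, so callers cannot accidentally mutate a shared package-level value.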
var NetworkDefaultFailsafeConfig = &FailsafeConfig{
	Hedge: &HedgePolicyConfig{
		Delay:    "200ms",
		MaxCount: 3,
	},
	Retry: &RetryPolicyConfig{
		MaxAttempts:     3,
		Delay:           "1s",
		Jitter:          "500ms",
		BackoffMaxDelay: "10s",
		BackoffFactor:   2,
	},
	Timeout: &TimeoutPolicyConfig{
		Duration: "30s",
	},
}
Prevent unintended sharing of default configurations by avoiding pointer usage.
Currently, `NetworkDefaultFailsafeConfig` is defined as a pointer to a `FailsafeConfig`. When assigning it to `Failsafe` in `rawNetworkConfig`, all instances will share the same underlying data. Any modifications to `Failsafe` may affect the default configuration and other instances that use it. To prevent this, consider defining `NetworkDefaultFailsafeConfig` as a value instead of a pointer.
Apply this diff to change `NetworkDefaultFailsafeConfig` to a value:
-var NetworkDefaultFailsafeConfig = &FailsafeConfig{
+var NetworkDefaultFailsafeConfig = FailsafeConfig{
Hedge: &HedgePolicyConfig{
Delay: "200ms",
MaxCount: 3,
},
Retry: &RetryPolicyConfig{
MaxAttempts: 3,
Delay: "1s",
Jitter: "500ms",
BackoffMaxDelay: "10s",
BackoffFactor: 2,
},
Timeout: &TimeoutPolicyConfig{
Duration: "30s",
},
}
Then, in line 500, update the assignment:
raw := rawNetworkConfig{
- Failsafe: NetworkDefaultFailsafeConfig,
+ Failsafe: &NetworkDefaultFailsafeConfig,
}
This way, each `NetworkConfig` will have its own copy of the default `FailsafeConfig`, preventing unintended shared mutations.
Committable suggestion was skipped due to low confidence.
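The sharing hazard described above is easy to reproduce. A self-contained sketch (type names simplified from the real config) showing that aliasing a pointer default, and even copying a struct that contains nested pointers, still mutates the shared default:

```go
package main

import "fmt"

type RetryPolicy struct{ MaxAttempts int }
type Failsafe struct{ Retry *RetryPolicy }

// A shared default exposed as a pointer: every consumer aliases
// the same underlying struct.
var defaultFailsafe = &Failsafe{Retry: &RetryPolicy{MaxAttempts: 3}}

func main() {
	a := defaultFailsafe // alias, not a copy
	b := defaultFailsafe
	a.Retry.MaxAttempts = 10 // mutates the shared default

	fmt.Println(b.Retry.MaxAttempts) // prints: 10

	// A shallow value copy does not help either: the nested Retry
	// pointer still aliases the default, so defaults should use
	// value fields or be deep-copied per consumer.
	c := *defaultFailsafe
	c.Retry.MaxAttempts = 7
	fmt.Println(defaultFailsafe.Retry.MaxAttempts) // prints: 7
}
```

This is why the suggested fix takes a fresh address per consumer (`&NetworkDefaultFailsafeConfig` over a value) or, more thoroughly, deep-copies nested policy structs.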
if lg.GetLevel() == zerolog.TraceLevel {
	lg.Debug().Err(errCall).Interface("response", resp).Msgf("upstream call result received")
} else {
	lg.Debug().Err(errCall).Msgf("upstream call result received")
}
Inconsistent logging levels between condition and log statements
At lines 290-294, the condition checks whether `lg.GetLevel() == zerolog.TraceLevel`, but both branches log with `lg.Debug()`. This inconsistency might cause confusion regarding which logs are captured at different logging levels. Consider aligning the logging level in the condition with the level used in the log statements for consistent behavior.
Apply this diff to fix the inconsistency:
if lg.GetLevel() == zerolog.TraceLevel {
- lg.Debug().Err(errCall).Interface("response", resp).Msgf("upstream call result received")
+ lg.Trace().Err(errCall).Interface("response", resp).Msgf("upstream call result received")
} else {
lg.Debug().Err(errCall).Msgf("upstream call result received")
}
📝 Committable suggestion
if lg.GetLevel() == zerolog.TraceLevel {
	lg.Debug().Err(errCall).Interface("response", resp).Msgf("upstream call result received")
} else {
	lg.Debug().Err(errCall).Msgf("upstream call result received")
}
if lg.GetLevel() == zerolog.TraceLevel {
	lg.Trace().Err(errCall).Interface("response", resp).Msgf("upstream call result received")
} else {
	lg.Debug().Err(errCall).Msgf("upstream call result received")
}
client := 0
transport := 0
Include `transport` errors in the summarized reasons

In the `SummarizeCauses` method of `ErrUpstreamsExhausted`, the `transport` error count is incremented but not included in the `reasons` slice for summarization. To provide a complete error summary, consider adding the `transport` errors to the `reasons`.
Apply this diff to include `transport` errors in the summary:
if cancelled > 0 {
reasons = append(reasons, fmt.Sprintf("%d hedges cancelled", cancelled))
}
+ if transport > 0 {
+ reasons = append(reasons, fmt.Sprintf("%d transport errors", transport))
+ }
if client > 0 {
reasons = append(reasons, fmt.Sprintf("%d user errors", client))
}
Also applies to: 731-736, 767-769
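A minimal sketch of the complete summarization, with every counter included so no counted category is silently dropped (names mirror the review, not the exact project code):

```go
package main

import (
	"fmt"
	"strings"
)

// summarize builds a human-readable reason list from error-category
// counts. Each non-zero counter contributes one entry, which is the
// property the review asks for (transport was previously omitted).
func summarize(cancelled, transport, client int) string {
	var reasons []string
	if cancelled > 0 {
		reasons = append(reasons, fmt.Sprintf("%d hedges cancelled", cancelled))
	}
	if transport > 0 {
		reasons = append(reasons, fmt.Sprintf("%d transport errors", transport))
	}
	if client > 0 {
		reasons = append(reasons, fmt.Sprintf("%d user errors", client))
	}
	if len(reasons) == 0 {
		return "no known causes"
	}
	return strings.Join(reasons, ", ")
}

func main() {
	fmt.Println(summarize(1, 2, 0)) // prints: 1 hedges cancelled, 2 transport errors
}
```

A table-driven test over all counter combinations is a cheap way to guarantee no category is ever dropped again.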
if _, ok := cause.(*BaseError); ok {
	return &ErrJsonRpcRequestUnmarshal{
		BaseError{
			Code:    ErrCodeJsonRpcRequestUnmarshal,
			Message: "failed to unmarshal json-rpc request",
			Cause:   cause,
		},
	}
} else if cause != nil {
	return &ErrJsonRpcRequestUnmarshal{
		BaseError{
			Code:    ErrCodeJsonRpcRequestUnmarshal,
			Message: fmt.Sprintf("%s", cause),
		},
	}
}
Ensure consistent handling of `Cause` and `Message` in `NewErrJsonRpcRequestUnmarshal`

In the `NewErrJsonRpcRequestUnmarshal` function, the `Cause` and `Message` fields are handled inconsistently:

- In the first case, both `Message` and `Cause` are set.
- In the `else if` branch, `Message` is set to the error string of `cause`, but `Cause` is not set.
- In the final `else`, neither `Cause` nor a specific `Message` from `cause` is set.

Consider setting the `Cause` field in all cases for consistency, and ensure that the `Message` accurately reflects the error.

Apply this diff to make the error handling consistent:
Apply this diff to make the error handling consistent:
} else if cause != nil {
return &ErrJsonRpcRequestUnmarshal{
BaseError{
Code: ErrCodeJsonRpcRequestUnmarshal,
- Message: fmt.Sprintf("%s", cause),
+ Message: "failed to unmarshal json-rpc request",
+ Cause: cause,
},
}
}
📝 Committable suggestion
if _, ok := cause.(*BaseError); ok {
	return &ErrJsonRpcRequestUnmarshal{
		BaseError{
			Code:    ErrCodeJsonRpcRequestUnmarshal,
			Message: "failed to unmarshal json-rpc request",
			Cause:   cause,
		},
	}
} else if cause != nil {
	return &ErrJsonRpcRequestUnmarshal{
		BaseError{
			Code:    ErrCodeJsonRpcRequestUnmarshal,
			Message: fmt.Sprintf("%s", cause),
		},
	}
}
if _, ok := cause.(*BaseError); ok {
	return &ErrJsonRpcRequestUnmarshal{
		BaseError{
			Code:    ErrCodeJsonRpcRequestUnmarshal,
			Message: "failed to unmarshal json-rpc request",
			Cause:   cause,
		},
	}
} else if cause != nil {
	return &ErrJsonRpcRequestUnmarshal{
		BaseError{
			Code:    ErrCodeJsonRpcRequestUnmarshal,
			Message: "failed to unmarshal json-rpc request",
			Cause:   cause,
		},
	}
}
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (13)
common/request.go (4)
Line range hint 39-42: LGTM: Added UseUpstream field to RequestDirectives

The addition of the `UseUpstream` field enhances the flexibility of upstream routing. This change aligns well with the improvements in upstream handling mentioned in the summary.

Consider adding an example in the comment to illustrate the wildcard usage, e.g., "For example, 'alchemy*' would target all upstreams starting with 'alchemy'."
82-85: LGTM: Thread-safe access to lastUpstream with improved nil handling

The use of `atomic.Value.Load()` ensures thread-safe access to the `lastUpstream` field, consistent with the atomic type changes. The additional nil checking prevents potential panics.

Consider using a type assertion with the comma-ok idiom to handle potential type mismatches safely:

if lu := r.lastUpstream.Load(); lu != nil {
	if upstream, ok := lu.(Upstream); ok {
		return upstream
	}
}
return nil

This change would prevent panics if an incorrect type is accidentally stored in `lastUpstream`.
109-146: LGTM: Improved Id method with robust type handling and concurrency safety

The updated `Id` method now returns `int64`, aligning with the use of more specific types for identifiers. The implementation handles various ID formats more robustly and uses atomic operations for thread-safe storage of the parsed ID.

Consider improving the error handling when parsing string IDs. Currently, if `strconv.ParseInt` fails, it silently sets the ID to 0. It might be better to return an error in this case:

uid, err := strconv.ParseInt(ids, 0, 64)
if err != nil {
	return 0, fmt.Errorf("failed to parse ID as int64: %w", err)
}
r.uid.Store(uid)
return uid, nil

This change would make it clear when an ID couldn't be parsed correctly, allowing the caller to handle this situation appropriately.
163-187: LGTM: Enhanced ApplyDirectivesFromHttp with query args support

The renamed and updated `ApplyDirectivesFromHttp` method now handles directives from both headers and query arguments, improving flexibility. The use of `util.Mem2Str` aligns with the standardization of string handling mentioned in the summary.

For consistency, consider using the same approach for all directive assignments. For example:

if useUpstream := util.Mem2Str(queryArgs.Peek("use-upstream")); useUpstream != "" {
	drc.UseUpstream = useUpstream
} else if useUpstream := util.Mem2Str(headers.Peek("X-ERPC-Use-Upstream")); useUpstream != "" {
	drc.UseUpstream = useUpstream
}

This approach would prioritize query args over headers for all directives, making the behavior more consistent and predictable.
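The precedence rule suggested above can be factored into a tiny helper so every directive follows the same order. This sketch uses plain strings instead of fasthttp values; the names are illustrative, not the project's API:

```go
package main

import "fmt"

// firstNonEmpty returns the first non-empty string. Passing the
// query-arg value before the header value encodes the
// "query args take precedence over headers" rule in one place.
func firstNonEmpty(vals ...string) string {
	for _, v := range vals {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	queryArg := "alchemy*"  // hypothetical ?use-upstream=alchemy*
	header := "infura-main" // hypothetical X-ERPC-Use-Upstream header
	useUpstream := firstNonEmpty(queryArg, header)
	fmt.Println(useUpstream) // prints: alchemy*
}
```

Applying the same helper to every directive keeps the precedence rule from being re-implemented (and possibly inverted) per field.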
common/json_rpc.go (3)
82-120: Consider revising empty string ID handling

The `parseID` method is well-implemented overall, but the handling of empty string IDs might be improved. Currently, an empty string ID is silently ignored, which could lead to unexpected behavior.

Consider either:

- Treating an empty string as an error case.
- Assigning a default ID value (like 0 or a random ID) for empty strings.

This would make the behavior more explicit and predictable.
209-274: Consider improving the error message for empty responses

The `ParseError` method has been significantly improved to handle various error formats. However, the error message for empty responses ("unexpected empty response from upstream endpoint") could be more specific.

Consider customizing this message based on whether the response was truly empty or just "null". For example:

- For an empty string: "Received an empty response from upstream endpoint"
- For "null": "Received a null response from upstream endpoint"

This would provide more precise information about the nature of the empty response.
343-396: Consider enhancing error handling in `GetReader`

The `GetReader` method has been significantly improved for memory efficiency. However, error handling could be enhanced:

- The error from `SonicCfg.Marshal(r.Error)` (line 376) is captured but not logged.
- Consider wrapping this error with additional context before returning.

For example:

if err != nil {
	return nil, fmt.Errorf("failed to marshal error: %w", err)
}

This would provide more context if an error occurs during marshaling.
erpc/http_server_test.go (6)
338-537: Comprehensive new test cases enhance error handling coverage

The addition of new test cases such as InvalidJSON, UnsupportedMethod, and IgnoredMethod greatly improves the test coverage for error scenarios and edge cases. These tests ensure that the server handles various error conditions correctly.
The consistent use of gock for mocking HTTP responses is commendable and helps in creating reliable tests.
Consider adding a test case for handling malformed JSON-RPC requests (e.g., missing "jsonrpc" field) to further improve coverage.
539-603: Robust handling of incomplete JSON-RPC requests

The new test cases for handling missing or zero-value ID and JSONRPC fields demonstrate the server's ability to automatically correct incomplete requests. This feature enhances the server's robustness and compatibility with various clients.
Consider adding a test case where both ID and JSONRPC fields are missing to ensure the server can handle the most minimal valid requests.
Line range hint 605-725: Comprehensive testing for multiple upstream configurations

The TestHttpServer_MultipleUpstreams function effectively tests the server's ability to handle multiple upstream configurations. The inclusion of tests for upstream selection based on headers and query parameters is particularly valuable.
Consider adding a test case for fallback behavior when an invalid upstream is specified in the header or query parameter.
727-775: Valuable integration tests with a real DRPC endpoint

The addition of integration tests with a real DRPC endpoint is commendable. These tests ensure that the server can handle real-world scenarios correctly.
Consider adding more integration test cases to cover other common scenarios, such as successful requests and rate limiting. Also, ensure that these tests can be easily disabled or mocked for CI/CD pipelines where external dependencies might be undesirable.
777-844: Excellent test fixture setup function improves test maintainability

The createServerTestFixtures function is a valuable addition that enhances the maintainability and readability of the tests. It effectively encapsulates the complexity of setting up the test environment and provides a reusable way to send requests in tests.
Consider adding an option to customize the server's shutdown behavior in the returned function. This would allow for more flexible cleanup in individual test cases.
Line range hint 1-844: Significant improvements in test coverage, structure, and maintainability

The changes to this file represent a substantial enhancement to the test suite for the HTTP server. Key improvements include:
- Expanded test coverage for various scenarios, including error handling and edge cases.
- Introduction of table-driven tests and configuration-based testing for improved flexibility.
- Addition of integration tests with real-world endpoints.
- Implementation of helper functions for better code reusability and test setup.
These changes greatly increase the robustness and reliability of the test suite.
Consider implementing a mock upstream server within the test package to reduce reliance on external services in integration tests. This would improve test stability and reduce the potential for flaky tests due to network issues or external service changes.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
- common/json_rpc.go (4 hunks)
- common/request.go (10 hunks)
- erpc/evm_json_rpc_cache_test.go (13 hunks)
- erpc/http_server_test.go (8 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- erpc/evm_json_rpc_cache_test.go
🧰 Additional context used
🔇 Additional comments (12)
common/request.go (7)
15-16: LGTM: Added RequestContextKey constant

The addition of `RequestContextKey` enhances context management capabilities for requests. This change aligns well with the overall improvements in request handling mentioned in the summary.
47-53: LGTM: Improved concurrency handling with atomic types

The switch to atomic types for `uid`, `lastValidResponse`, and `lastUpstream` enhances thread safety and potentially improves performance in high-concurrency scenarios. This change aligns well with similar updates in other files mentioned in the summary.

As suggested in a previous review comment, consider adding benchmarks to measure the performance impact of using atomic types for the `NormalizedRequest` struct.
74-75: LGTM: Thread-safe update of lastUpstream

The use of `atomic.Value.Store()` ensures thread-safe updates to the `lastUpstream` field, consistent with the switch to atomic types in the struct.
92-92: LGTM: Thread-safe operations for lastValidResponse

The use of atomic operations (`Store` and `Load`) in the `SetLastValidResponse` and `LastValidResponse` methods ensures thread-safe updates and access to the `lastValidResponse` field. These changes are consistent with the overall improvements in concurrency handling mentioned in the summary.

Also applies to: 99-99
230-230: LGTM: Updated JSON unmarshalling in JsonRpcRequest method

The switch to `SonicCfg.Unmarshal` aligns with the changes in JSON handling mentioned in the summary and is consistent with similar updates in other parts of the codebase.

As suggested in a previous review comment, consider adding benchmarks to compare the performance of `SonicCfg.Unmarshal` with the previous `sonic.Unmarshal`.
276-276: LGTM: Updated string conversion in MarshalZerologObject

The use of `util.Mem2Str` for converting the body to a string aligns with the standardization of string handling mentioned in the summary and is consistent with similar changes across the codebase.

As suggested in a previous review comment, consider adding benchmarks to compare the performance of `util.Mem2Str` with the previous direct conversion method.
317-319: LGTM: Updated JSON marshalling in MarshalJSON method

The switch to `SonicCfg.Marshal` is consistent with the earlier change to `SonicCfg.Unmarshal` and aligns with the improvements in JSON handling mentioned in the summary.

As suggested in a previous review comment, ensure that all remaining uses of `sonic.Marshal` in other files are updated to `SonicCfg.Marshal` for consistency across the codebase.

Also applies to: 323-324
common/json_rpc.go (4)
21-44: LGTM: Improved `JsonRpcResponse` structure

The changes to the `JsonRpcResponse` struct enhance thread safety and performance through the addition of mutex locks and support for lazy loading. This is a good improvement for concurrent operations.
59-80: LGTM: Well-implemented `NewJsonRpcResponseFromBytes` method

This new method provides a clean way to create a `JsonRpcResponse` from raw byte data, with proper error handling for parsing the ID and error fields. It's a valuable addition for working with raw JSON-RPC responses.
164-207: LGTM: Efficient `ParseFromStream` implementation

The `ParseFromStream` method is a great addition, using `ast.Searcher` for efficient parsing without unmarshaling the entire response. This approach minimizes memory usage and improves performance when working with large responses.
Line range hint 1-622: Overall, excellent improvements to JSON-RPC handling

The changes in this file significantly enhance the JSON-RPC handling capabilities of the codebase. Key improvements include:
- Better concurrency handling with the introduction of mutex locks.
- More efficient parsing and marshaling of JSON-RPC messages.
- Improved error handling and ID management.
- Performance optimizations, particularly in the `GetReader` method.

These changes should result in a more robust, efficient, and thread-safe implementation. Great work!
erpc/http_server_test.go (1)
262-289: Improved test structure and coverage for single upstream scenarios

The changes in this section significantly enhance the test coverage and structure:
- Renamed the test function to better reflect its purpose.
- Introduced a configuration-based approach for testing various scenarios.
- Added new test cases to cover different aspects of single upstream behavior.
These improvements make the tests more comprehensive and easier to maintain. The use of configuration cases allows for testing multiple scenarios without duplicating code.
Also applies to: 291-336
func (r *JsonRpcRequest) UnmarshalJSON(data []byte) error {
	type Alias JsonRpcRequest
	aux := &struct {
		*Alias
		ID json.RawMessage `json:"id,omitempty"`
	}{
		Alias: (*Alias)(r),
	}
	aux.JSONRPC = "2.0"

	if err := SonicCfg.Unmarshal(data, &aux); err != nil {
		return err
	}

	if aux.ID == nil {
		r.ID = util.RandomID()
	} else {
		var id interface{}
		if err := SonicCfg.Unmarshal(aux.ID, &id); err != nil {
			return err
		}

		switch v := id.(type) {
		case float64:
			r.ID = int64(v)
		case string:
			if v == "" {
				r.ID = util.RandomID()
			} else {
				var parsedID float64
				if err := SonicCfg.Unmarshal([]byte(v), &parsedID); err != nil {
					return err
				}
				r.ID = int64(parsedID)
			}
		case nil:
			r.ID = util.RandomID()
		default:
			r.ID = util.RandomID()
		}
	}

	if r.ID == 0 {
		r.ID = util.RandomID()
	}

	return nil
}
🛠️ Refactor suggestion
Consider simplifying ID parsing logic
The UnmarshalJSON
method for JsonRpcRequest
has been improved to handle various ID types more robustly. However, the logic could potentially be simplified:
- The nested `if-else` structure for handling string IDs (lines 453-461) could be flattened.
- The final check for `r.ID == 0` (lines 469-471) might be unnecessary given the previous logic.
Consider refactoring this section to reduce nesting and potentially remove redundant checks. This could improve readability without sacrificing functionality.
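One possible flattened shape is sketched below: every branch that fails to produce a positive ID falls through to a single fallback, which also makes the trailing `r.ID == 0` check redundant. (`randomID` is a hypothetical stand-in for the project's `util.RandomID`.)

```go
package main

import (
	"fmt"
	"strconv"
)

// normalizeID flattens the nested switch: numeric and string IDs are
// converted where possible, and everything else (nil, empty, zero,
// unparsable) falls through to one fallback at the end.
func normalizeID(raw interface{}, randomID func() int64) int64 {
	switch v := raw.(type) {
	case float64:
		if v != 0 {
			return int64(v)
		}
	case string:
		if n, err := strconv.ParseFloat(v, 64); err == nil && n != 0 {
			return int64(n)
		}
	}
	return randomID()
}

func main() {
	fallback := func() int64 { return 999 } // stand-in for util.RandomID
	fmt.Println(normalizeID(float64(42), fallback)) // prints: 42
	fmt.Println(normalizeID("17", fallback))        // prints: 17
	fmt.Println(normalizeID(nil, fallback))         // prints: 999
	fmt.Println(normalizeID("", fallback))          // prints: 999
}
```

Because the zero case is folded into each branch's guard, there is exactly one place where a random ID is generated, instead of five in the original.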
@@ -35,7 +35,7 @@ func TestHttpServer_RaceTimeouts(t *testing.T) {

 cfg := &common.Config{
 	Server: &common.ServerConfig{
-		MaxTimeout: "200ms", // Set a very short timeout for testing
+		MaxTimeout: "500ms", // Set a very short timeout for testing
💡 Codebase verification
Confirmed: Timeout settings and assertions have been updated as described.
The following changes have been verified in erpc/http_server_test.go
:
- `MaxTimeout` increased from 200ms to 500ms at line 38.
- Error message assertions updated to use lowercase "timeout" in multiple locations.
These modifications may reduce the likelihood of triggering timeout scenarios and indicate changes in the server's error handling logic. Please ensure that these changes align with the intended testing behavior and server functionality.
🔗 Analysis chain
Timeout configuration and error message changes may affect test behavior.
The following changes have been made:
- MaxTimeout increased from 200ms to 500ms in ServerConfig.
- Error message assertions now check for lowercase "timeout" instead of "Timeout".
These modifications might impact the test's ability to trigger timeout scenarios and suggest changes in the server's error handling logic. Ensure that these changes align with the intended behavior of the server and the purpose of the tests.
To verify the impact of these changes, you can run the following script:
Also applies to: 145-145, 154-154, 164-164, 176-176, 215-215
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for occurrences of timeout-related configurations and assertions
grep -n -r --include="*.go" "MaxTimeout.*ms" .
grep -n -r --include="*.go" "assert.*[Tt]imeout" .
Length of output: 1225
Summary by CodeRabbit
New Features
- `GetProjects` added to the `ERPC` struct to retrieve all projects.
- `GetAll` added to the `ProjectsRegistry` struct to access all prepared projects.
- `AlchemyHttpJsonRpcClient`.
- `RandomID` for generating random JSON-RPC IDs.
- `GetMetricsTracker` added to the `UpstreamsRegistry` class.

Bug Fixes

- `SupportsNetwork` method across multiple clients.
- `UpstreamsRegistry`.

Documentation

Chores

- `go.mod` file.

Tests