This repository has been archived by the owner on Apr 8, 2024. It is now read-only.
The current inferencing report only provides percentile metrics for the LightGBM C API, because that is the only variant implementing latency measurement at the request level. The general issue is that per-request predictions through the Python API will also measure a lot of framework overhead (which might be a good thing to surface).
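As a rough illustration of the point above, here is a minimal, stdlib-only sketch of request-level latency measurement with percentile reporting. The `fake_predict` callable is a hypothetical stand-in for a real single-row predict call (e.g. a LightGBM booster's Python `predict`); timing at this level inevitably includes Python-side overhead such as argument handling and data conversion, which is exactly what the issue notes would be surfaced.

```python
import time
import statistics

def measure_request_latency(predict_fn, requests):
    """Time predict_fn on each single-row request; return latencies in ms.

    Timing a Python-level predict per request captures interpreter and
    framework overhead (argument conversion, copies) on top of the
    model's own inference time.
    """
    latencies = []
    for row in requests:
        start = time.perf_counter()
        predict_fn(row)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

def percentile_report(latencies_ms):
    """Summarize per-request latencies as p50/p90/p99 (milliseconds)."""
    q = statistics.quantiles(latencies_ms, n=100)
    return {"p50": q[49], "p90": q[89], "p99": q[98]}

# Hypothetical predict function standing in for a real model call.
fake_predict = lambda row: sum(row) * 0.5
lats = measure_request_latency(fake_predict, [[0.1, 0.2]] * 200)
report = percentile_report(lats)
print(report)
```

A C-API harness would do the equivalent timing around each native predict call, which is why only that variant currently feeds the percentile columns of the report.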