Description
When users have configured an S3 sink for their reports, the chalk api server returns a presigned URL to the chalk client, and the report is then uploaded to S3 as a single large blob. This approach forces consumers of the blob to parse the entire JSON and filter out potentially large keys (e.g., SBOM/SAST results, or other user-provided keys) they don't care about; and when they do care about those keys, they may still need to pass the entire report around to other components.
It would be useful if chalk exposed an option to upload large keys to S3 individually and include their object paths as the "values" of those keys in the report. This way the final report stays small, while the total data uploaded by the client remains approximately the same.
Result
If the option is enabled, keys whose values exceed a configurable threshold (e.g., 50KB) would be stored directly in S3, and the value of each such key in the report would be the path to its S3 object.
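The proposed behavior could be sketched roughly as follows. This is an illustrative sketch only, not chalk's actual implementation: the function name, threshold constant, and `upload` callback are all hypothetical, and a real version would wrap an S3 client (e.g., a presigned-URL PUT per offloaded key).

```python
import json

# Hypothetical threshold, matching the 50KB example above.
LARGE_KEY_THRESHOLD = 50 * 1024


def offload_large_keys(report: dict, bucket: str, prefix: str, upload) -> dict:
    """Return a copy of `report` where any value whose JSON encoding exceeds
    the threshold is handed to `upload(path, blob)` and replaced inline by
    its S3 object path. `upload` is a caller-supplied function (hypothetical;
    e.g., a thin wrapper over an S3 client or a presigned-URL PUT)."""
    slim = {}
    for key, value in report.items():
        blob = json.dumps(value).encode()
        if len(blob) > LARGE_KEY_THRESHOLD:
            path = f"s3://{bucket}/{prefix}/{key}"
            upload(path, blob)
            slim[key] = path  # the value becomes a pointer to the S3 object
        else:
            slim[key] = value
    return slim


# Example: a large SBOM value is offloaded, a small key stays inline.
uploaded = {}
slim = offload_large_keys(
    {"SBOM": "x" * 100_000, "CHALK_ID": "abc"},
    "my-bucket", "reports/123",
    uploaded.__setitem__,  # stand-in for a real S3 upload
)
```

Consumers that only need the small keys can then read the slim report alone, fetching the referenced S3 objects on demand.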