Working around NATS's 64MB entry size limit #731
For this issue, for the time being, to avoid adding any additional complexity, we have a short-term gap closure: all the collectors mimic guacone and talk directly to the GraphQL endpoint (#1104).
Another proposal, raised by @dejanb at the last community call, is an API that lets the ingestor handle documents directly; this would not introduce any new running services.
Based on further discussion, the decision was made to use a blob store (via a cloud provider) to store SBOMs and emit an entry containing the URI to the pub/sub queue. For quick demos and testing, the pub/sub will not be used, allowing quick usability without the overhead of the blob store. Tasks based on these:
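As a rough illustration of the blob-store approach described above, the sketch below uses in-memory stand-ins for the blob store and the pub/sub queue. All names (`InMemoryBlobStore`, the `blob://` URI scheme, and the collector/ingestor helpers) are illustrative, not GUAC's actual APIs; the point is that the full SBOM lives out of band and only a small URI entry crosses the queue, far below the 64MB cap.

```python
import hashlib


class InMemoryBlobStore:
    """Stand-in for a cloud blob store (e.g. S3 or GCS); API names are illustrative."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        # Content-address the document so the URI is stable and deduplicates.
        digest = hashlib.sha256(data).hexdigest()
        uri = f"blob://sboms/sha256/{digest}"
        self._blobs[uri] = data
        return uri

    def get(self, uri: str) -> bytes:
        return self._blobs[uri]


class InMemoryQueue:
    """Stand-in for the NATS subject; only small URI entries are published."""

    MAX_ENTRY_BYTES = 64 * 1024 * 1024  # the NATS limit the design works around

    def __init__(self):
        self._entries = []

    def publish(self, entry: str):
        # A URI is a few hundred bytes at most, well under the 64MB cap.
        if len(entry.encode()) > self.MAX_ENTRY_BYTES:
            raise ValueError("entry exceeds queue size limit")
        self._entries.append(entry)

    def pull(self) -> str:
        return self._entries.pop(0)


def collect(store: InMemoryBlobStore, queue: InMemoryQueue, sbom: bytes):
    """Collector side: upload the document, emit only its URI to the queue."""
    queue.publish(store.put(sbom))


def ingest(store: InMemoryBlobStore, queue: InMemoryQueue) -> bytes:
    """Ingestor side: resolve the queued URI back to the full document."""
    return store.get(queue.pull())
```

The same shape also covers the demo/testing mode: with the queue disabled, the collector can hand the blob URI (or the bytes themselves) straight to the ingestor.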
FYI: cdevents/spec#171 is now merged. I hope we'll be able to release v0.4 in the next month with the SBOM URI included.
Because NATS has a 64MB data limit, we can't pass documents through the regular collector flow. In the longer term, we want a solution that stores the document in a temporary data store and passes a reference to it in the NATS entry. In the meantime, the alternative for file ingestion is to do the ingestion as part of the binary and talk directly to the GraphQL mutation API. That is, we will use

`guacone files`

for file ingestion, which does the ingestion (parsing) locally.
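For the short-term path where ingestion talks directly to the GraphQL endpoint, the client side might look like the following sketch. The `ingestDocument` mutation name and the endpoint path are placeholders, not GUAC's actual schema; a stub GraphQL server is included only so the example is self-contained and runnable.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request


def ingest_via_graphql(endpoint: str, document: str) -> dict:
    """POST a GraphQL mutation directly to the endpoint, bypassing the queue.

    The mutation name `ingestDocument` is illustrative only.
    """
    payload = json.dumps({
        "query": "mutation Ingest($doc: String!) { ingestDocument(document: $doc) }",
        "variables": {"doc": document},
    }).encode()
    req = request.Request(
        endpoint, data=payload, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


class StubGraphQL(BaseHTTPRequestHandler):
    """Minimal stand-in for the GraphQL server; echoes the document's length."""

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        doc = body["variables"]["doc"]
        reply = json.dumps({"data": {"ingestDocument": len(doc)}}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep the demo quiet
        pass


def run_demo() -> dict:
    # Bind to an ephemeral port so the demo never collides with a real server.
    server = HTTPServer(("127.0.0.1", 0), StubGraphQL)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        return ingest_via_graphql(
            f"http://127.0.0.1:{server.server_port}/query",
            '{"spdxVersion": "SPDX-2.3"}',
        )
    finally:
        server.shutdown()
```

This mirrors what `guacone files` does conceptually: parse locally, then send mutations straight to the GraphQL API instead of publishing the document to NATS.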