Support limits on attributes, links and events #329
Conversation
Attribute limits can be set per event, link and span, as well as general attribute limits that are used if no specific limits are set for the individual type. The number of attributes, links and events dropped due to being over the limit is tracked, which required changing each to be a record that holds both the limits to apply and how many entries have been dropped.
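As a rough sketch of the shape described here (field names and defaults are illustrative assumptions, not the SDK's actual record definitions), each kind ends up as a record carrying both its limits and a dropped counter:

%% illustrative only: field names/defaults are assumptions, not the real records
-record(attributes, {map = #{}          :: map(),
                     count_limit = 128  :: non_neg_integer(),
                     dropped = 0        :: non_neg_integer()}).

-record(events, {list = []          :: [term()],
                 count_limit = 128  :: non_neg_integer(),
                 dropped = 0        :: non_neg_integer()}).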
Codecov Report
@@ Coverage Diff @@
## main #329 +/- ##
==========================================
- Coverage 38.63% 38.29% -0.35%
==========================================
Files 51 54 +3
Lines 3499 3562 +63
==========================================
+ Hits 1352 1364 +12
- Misses 2147 2198 +51
Force-pushed from e6d7e23 to 563c71d
Fixed the issue with adding attributes where the key already exists but the size limit has been reached. Not really efficient... but it works for now.
An open question that may lead to another PR is whether we should get rid of the functions in the API that create events and links. Since they don't know about the limits on attributes, the records returned by those functions aren't "finished": when adding them to the span the attributes have to be parsed again. I'd lean towards removing them, but they are likely already in use by some users, at least a little.
Think I got a partial middle ground. I changed the link[s] and event[s] functions to return maps representing each instead of records and moved the records to the SDK where they are a better fit. It should probably still be simplified, but annoyingly I don't know what people are already relying on and it is hard to get feedback.
Force-pushed from 39b6c38 to 56f2712
Force-pushed from 56f2712 to 1a6569f
add_events(NewEvents, Events=#events{count_limit=CountLimit,
                                     list=List}) when length(List) < CountLimit ->
    Limit = CountLimit - length(List),
this calculates the length of the list twice per event addition. At low counts that's probably cheap, but this may start becoming a significant CPU hog for people with custom limits set higher. It would probably be more efficient to keep the count as an integer field in the record (like with `dropped` and `count_limit`) and then pattern match on the values.
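A sketch of that suggestion, assuming a `count` field is added to the record (the field is an assumption; it does not exist in the PR as written). Keeping the size as an integer avoids the length/1 traversal and lets the full case be handled by its own clause:

%% assumes an added `count` field tracking the current number of events
add_event(_Event, Events=#events{count=Count, count_limit=Limit, dropped=Dropped})
  when Count >= Limit ->
    %% already at the limit: drop the new event and note that it was dropped
    Events#events{dropped=Dropped + 1};
add_event(Event, Events=#events{list=List, count=Count}) ->
    Events#events{list=[Event | List], count=Count + 1}.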
Yea, I was debating keeping it in the record, since it would be a lot of work for a user to go around the add functions to update the list, so it shouldn't be an issue that it gets out of sync. But I figured it was an optimization to do later.

But one thing I can do quickly: there is no need to check in the guard, it can just call `add_events_` and return immediately since `Limit` will be 0.
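Roughly what that simplification looks like (a sketch; the exact shape of the internal `add_events_` helper is an assumption):

%% no guard needed: when the list is already full, Limit computes to 0 and the
%% helper only has to count every remaining new event as dropped
add_events(NewEvents, Events=#events{count_limit=CountLimit,
                                     list=List}) ->
    Limit = CountLimit - length(List),
    add_events_(NewEvents, Limit, Events).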
yeah that's probably a good workaround to save one call at least, +1 on that
I'm approving considering this is to get us to a release candidate, but I have some concerns about performance of some operations, and highlighted a spot where the merging of maps may be counter-intuitive in its results (and maybe a sort() could be used to be deterministic)
@@ -614,22 +636,86 @@ record_exception_with_message_works(Config) ->
    SpanCtx = ?start_span(<<"span-1">>),
    try throw(my_error) of
        _ ->
            ok
you can actually use `try throw(my_error) catch ...` and skip the `of Pat -> Clause` section since this is guaranteed to fail.
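For reference, that shape looks roughly like this (a generic sketch, not the actual test code): with a guaranteed throw only the catch clauses can run, so the `of` section adds nothing:

try throw(my_error)
catch
    throw:my_error ->
        %% the real test records/asserts the exception here
        ok
end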
Hm, yea, thanks, don't know where that came from.
    %% dropped because count limit was set to 2
    ?set_attribute(<<"attr-4">>, <<"attr-value-4">>),
    %% won't be dropped because attr-1 already exists so it overrides the value
    ?set_attribute(<<"attr-1">>, <<"attr-value-5">>),
oh interesting semantics there where the oldest value wins as long as the keys are new. Cool gotcha on the limits' behaviour.
Yup, something I got wrong the first time (fixed before opening the PR) :)
From the spec: "for each unique attribute key, addition of which would result in exceeding the limit, SDK MUST discard that key/value pair."
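Put as code (a hedged sketch of the rule, not the actual `otel_attributes` implementation; the record fields reuse the illustrative names from the earlier sketch): once the count limit is reached, setting an existing key still overrides its value, while a brand new key is discarded and counted as dropped:

set(Key, Value, Attributes=#attributes{map=Map, count_limit=Limit, dropped=Dropped}) ->
    case maps:is_key(Key, Map) orelse map_size(Map) < Limit of
        true ->
            %% existing key, or still under the limit: store/override the value
            Attributes#attributes{map=Map#{Key => Value}};
        false ->
            %% new key while at the limit: discard it, per the spec
            Attributes#attributes{dropped=Dropped + 1}
    end.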
%% we can't just use `otel_attributes:set' since that will drop instead
%% of override a key/value pair when the limit is reached even if the
%% new attribute matches a key in the current attributes
{otel_resource, otel_attributes:new(maps:merge(CurrentMap, UpdatingMap), 128, 255)}.
A gotcha here is that your merging won't necessarily respect the insertion order, just the iteration order. So on this operation the validation may become really tricky in terms of what's left over, and may change from time to time even if the original maps compare equal. The order of values returned by `maps:from_list` and map iterators isn't guaranteed as far as I know.
This is actually wrong. This was done before I learned about the mistake I made reading the spec for attributes. This can rely on the regular attributes module because that does not drop the new attributes like I originally had it doing.
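For reference, the deterministic ordering suggested earlier (the sort() idea) could look something like this; a small sketch, not something the PR implements:

%% map iteration order is unspecified, so sort the merged key/value pairs to
%% get a stable order before anything order-sensitive (like a limit) is applied
merge_deterministic(CurrentMap, UpdatingMap) ->
    lists:sort(maps:to_list(maps:merge(CurrentMap, UpdatingMap))).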
Closes #317 #318 #319 #320 #321 #323 #324
Sadly a lot here. Opening as a draft because there is still a bug in attributes update I need to fix (it is marked with a `TODO` right now). And I want to do a review myself; I'm opening this once tests and dialyzer passed just to get it open tonight.

I also want to revisit the API in the `opentelemetry` module that creates stuff like `events` and `links` and whether those should be removed. Right now the code in `otel_links` and `otel_events` has to deal with the passed-in links and events being in multiple different formats.

Just realized I didn't run the Elixir tests and there are failures there because they compare stuff like the resulting attributes from an export, so I'll have to fix that up in the morning.
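As an illustration of what handling multiple formats means in practice (the accepted shapes below are hypothetical, not the actual API of `otel_events`), the SDK side ends up normalizing whatever the caller passed into one internal representation:

%% hypothetical: accept either a map or a {Name, Attributes} tuple for an
%% event and convert both into a single internal map form
normalize_event(Event) when is_map(Event) ->
    Event;
normalize_event({Name, Attributes}) when is_map(Attributes) ->
    #{name => Name, attributes => Attributes}.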