
Skip lambda allocation in FindTransientAnnotation #2154

Merged: 6 commits merged into OData:master from jpaiva/FindTransientAnnotation2 on Aug 13, 2021

Conversation

@joaocpaiva (Contributor) commented Aug 6, 2021:

Issues

This method accounts for 3.09% of total allocations:

  • 1% for the delegate
  • 1.61% for the enumerator
  • 0.48% for the closure

[Allocation profile screenshot]

Description

This change replaces the LINQ usage in this method with a manual implementation, avoiding the lambda allocation. Every call to Any allocates a lambda (a delegate, plus a closure over the captured locals). That is normally negligible outside a hot path, but since this method is on the AGS hot path, the lambda alone costs 1% of allocations, which is a lot. A minimal before/after sketch follows.
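To make the pattern concrete, here is a minimal before/after sketch; the type and member names are illustrative assumptions and do not match the actual EdmModel internals:

    using System.Collections.Generic;
    using System.Linq;

    // Illustrative sketch only; type and method names do not match the real
    // EdmModel.FindTransientAnnotation internals.
    class TransientAnnotation
    {
        public object Element { get; set; }
        public string Name { get; set; }
    }

    static class AnnotationLookup
    {
        // Before: each call to Any allocates a closure over 'element' and 'name',
        // a delegate for the lambda, and an enumerator.
        public static bool ContainsWithAny(
            IReadOnlyList<TransientAnnotation> annotations, object element, string name)
        {
            return annotations.Any(a => a.Element == element && a.Name == name);
        }

        // After: a plain indexed for loop performs the same check with no per-call allocations.
        public static bool ContainsWithLoop(
            IReadOnlyList<TransientAnnotation> annotations, object element, string name)
        {
            for (int i = 0; i < annotations.Count; i++)
            {
                TransientAnnotation annotation = annotations[i];
                if (annotation.Element == element && annotation.Name == name)
                {
                    return true;
                }
            }

            return false;
        }
    }

The loop relies on the list exposing Count and an indexer, which is exactly what the switch to ArrayVersioningList described below provides.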

It also removes the enumerator usage, replacing it with an O(1) random-access for loop. ArrayVersioningList is better for search scenarios because it avoids the enumerator allocation altogether and gives O(1) random access. LinkedVersioningList would only be useful if the data structure changed very often at runtime, which I don't think is the case; the main runtime scenario is search. So I am proposing to replace it with ArrayVersioningList. Furthermore, when adding a new element, the list already swapped to ArrayVersioningList once it had 5 or more elements, so I don't really see the benefit of not using ArrayVersioningList in all cases, particularly if search is the main path. A simplified sketch of the array-backed trade-off follows.
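As a heavily simplified sketch of that trade-off (this is not the library's actual VersioningList<TElement>/ArrayVersioningList code, just an assumed immutable, array-backed shape):

    using System;

    // Simplified, assumed sketch of an immutable, array-backed versioning list;
    // the real VersioningList<TElement>/ArrayVersioningList in OData differs in detail.
    sealed class ArrayBackedList<T>
    {
        private readonly T[] elements;

        public ArrayBackedList() : this(Array.Empty<T>()) { }

        private ArrayBackedList(T[] elements)
        {
            this.elements = elements;
        }

        public int Count => this.elements.Length;

        // O(1) random access: searching with a for loop needs no enumerator object.
        public T this[int index] => this.elements[index];

        // Add copies the existing elements into a new, larger array. Each Add is O(n)
        // and allocates a new array, but the resulting list stays cheap to search.
        public ArrayBackedList<T> Add(T value)
        {
            T[] copy = new T[this.elements.Length + 1];
            Array.Copy(this.elements, copy, this.elements.Length);
            copy[this.elements.Length] = value;
            return new ArrayBackedList<T>(copy);
        }
    }

Add pays an extra array allocation and copy so that the far more frequent search path allocates nothing; that trade-off is exactly what the review comment further down asks about.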

Avoiding Linq.Any() is good not only for memory but also for CPU, as this benchmark shows:

[Benchmark results screenshot]
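The PR's actual benchmark code is not included here, but a comparable BenchmarkDotNet setup might look like the following hypothetical sketch (the numbers in the screenshot come from the author's benchmark, not from this code):

    using System.Collections.Generic;
    using System.Linq;
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    // Hypothetical benchmark comparing Enumerable.Any against a manual indexed loop.
    [MemoryDiagnoser]
    public class AnyVersusLoopBenchmark
    {
        private List<int> values;
        private int target;

        [GlobalSetup]
        public void Setup()
        {
            this.values = Enumerable.Range(0, 100).ToList();
            this.target = 99;
        }

        [Benchmark(Baseline = true)]
        public bool WithAny()
        {
            // Copying to a local makes the lambda capture it via a heap-allocated
            // closure; the call also allocates a delegate and an enumerator.
            int target = this.target;
            return this.values.Any(v => v == target);
        }

        [Benchmark]
        public bool WithLoop()
        {
            // Indexed loop: same result, no per-call allocations.
            for (int i = 0; i < this.values.Count; i++)
            {
                if (this.values[i] == this.target)
                {
                    return true;
                }
            }

            return false;
        }
    }

    public static class Program
    {
        public static void Main() => BenchmarkRunner.Run<AnyVersusLoopBenchmark>();
    }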

Checklist (Uncheck if it is not completed)

  • Test cases added
  • Build and test with one-click build and test script passed

@gathogojr (Contributor) left a comment:
LG

@pull-request-quantifier-deprecated

This PR has 188 quantified lines of changes. In general, a change size of up to 200 lines is ideal for the best PR experience!


Quantification details

Label      : Medium
Size       : +51 -137
Percentile : 57.6%

Total files changed: 4

Change summary by file extension:
.cs : +51 -137

Change counts above are quantified counts, based on the PullRequestQuantifier customizations.

Why proper sizing of changes matters

Optimal pull request sizes drive a better, more predictable PR flow as they strike a
balance between PR complexity and PR review overhead. PRs within the
optimal size (typically small or medium sized PRs) mean:

  • Fast and predictable releases to production:
    • Optimal size changes are more likely to be reviewed faster with fewer
      iterations.
    • Similarity in low PR complexity drives similar review times.
  • Review quality is likely higher as complexity is lower:
    • Bugs are more likely to be detected.
    • Code inconsistencies are more likely to be detected.
  • Knowledge sharing is improved among the participants:
    • Small portions can be assimilated better.
  • Better engineering practices are exercised:
    • Solving big problems by dividing them into well-contained, smaller problems.
    • Exercising separation of concerns within the code changes.

What can I do to optimize my changes

  • Use the PullRequestQuantifier to quantify your PR accurately
    • Create a context profile for your repo using the context generator
    • Exclude files that do not need to be reviewed or that do not add to the review complexity. Examples: autogenerated code, docs, project IDE setting files, binaries, etc. Check out the Excluded section of your prquantifier.yaml context profile.
    • Understand your typical change complexity, drive towards the desired complexity by adjusting the label mapping in your prquantifier.yaml context profile.
    • Only use the labels that matter to you, see context specification to customize your prquantifier.yaml context profile.
  • Change your engineering behaviors
    • For PRs that fall outside of the desired spectrum, review the details and check if:
      • Your PR could be split into smaller, self-contained PRs instead.
      • Your PR only solves one particular issue. (For example, don't refactor and code new features in the same PR).

How to interpret the change counts in git diff output

  • One line was added: +1 -0
  • One line was deleted: +0 -1
  • One line was modified: +1 -1 (git diff doesn't track modifications; it
    interprets such a line as one addition plus one deletion)
  • Change percentiles: Change characteristics (addition, deletion, modification)
    of this PR in relation to all other PRs within the repository.



@mikepizzo mikepizzo added this to the 7.9.1 milestone Aug 10, 2021
@@ -68,7 +68,7 @@ public override int Count

public override VersioningList<TElement> Add(TElement value)
{
-    return new LinkedVersioningList(this, value);
+    return new ArrayVersioningList(this, value);
A reviewer (Contributor) commented:
If I understand the implementation, this results in a new array being allocated, and elements copied from the previous array to the new one. Doesn't this lead to a larger memory footprint and more GC over time? Is the rationale that we're willing to sacrifice memory in order to get faster search times?

@habbes habbes merged commit 6003304 into OData:master Aug 13, 2021
@joaocpaiva joaocpaiva deleted the jpaiva/FindTransientAnnotation2 branch August 26, 2021 16:24