Add PriorityQueue to System.Collections.Generic (#43957) #46009
Conversation
This commit adds a new data structure: a priority queue. Fixes dotnet#43957
Note: this serves as a reminder for when your PR is modifying a ref *.cs file and adding/modifying public APIs. Please make sure the API implementation in the src *.cs file is documented with triple-slash comments, so the PR reviewers can sign off on that change.
Tagging subscribers to this area: @eiriktsarpalis

Issue Details: DRAFT, TO RECEIVE INITIAL FEEDBACK ON THE SCAFFOLDING. This pull request adds a new data structure, System.Collections.Generic.PriorityQueue. The priority queue functionality has long been requested by the community, with an API discussion that lasted five years. Fixes #43957
Seems about right (I didn't check the API, but no doubt it's as reviewed). Now for the implementation! BTW, as part of this work we will need tests for it in the dotnet/performance repo. There are lots of examples and good docs there.
Thank you! 😊 BTW, do you know why the build may be failing on an iOS release? It seems like some transient error.
@steveisok who should look at this? It's new to me:
@pgolebiowski I would ignore this for now, but in general you can rerun CI by closing and reopening the PR with the buttons here. It is possible to rerun individual legs, but I am not sure whether you need write permissions; just ask if you need to. But for now, unless something is clearly your responsibility, I would ignore it. One other thing: as you can see, we run CI even on draft PRs, and each time you push it restarts CI, which competes with other jobs. It's fine to have this PR started, but you will now probably want to push work all together when you have something pretty much ready for review.
@danmosemsft New to me as well. We'll look into it.
From the looks of things,
@steveisok I'm not so sure,
Reference APIs look good to me.
In this commit, I modified the API reference using [these guidelines](https://github.com/dotnet/runtime/blob/4d784693ebc5f91c7eede32170046355ef3969b2/docs/coding-guidelines/updating-ref-source.md), following the advice from @danmosemsft. The automatically generated code (with `dotnet msbuild /t:GenerateReferenceAssemblySource`) didn't build out of the box, so I tweaked it manually.
Added generic tests for <int, int> and <string, string>. Removed non-generic tests.
This commit adds the core of the heap implementation for the priority queue. It doesn't implement the method `EnqueueDequeue`, as I'm not convinced by it. It also doesn't implement the method `CopyTo(Array array, int index)`, leaving this one for later.
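For context, here is a rough, abridged sketch of the API shape under discussion, based on the approved proposal in #43957 (member list trimmed and not guaranteed to match the final surface exactly):

```csharp
namespace System.Collections.Generic
{
    public class PriorityQueue<TElement, TPriority>
    {
        public PriorityQueue();
        public PriorityQueue(int initialCapacity);
        public PriorityQueue(IComparer<TPriority>? comparer);
        public PriorityQueue(IEnumerable<(TElement Element, TPriority Priority)> items);

        public int Count { get; }
        public IComparer<TPriority> Comparer { get; }
        public UnorderedItemsCollection UnorderedItems { get; }

        public void Enqueue(TElement element, TPriority priority);
        public TElement Peek();
        public TElement Dequeue();
        public TElement EnqueueDequeue(TElement element, TPriority priority);
        public bool TryPeek(out TElement element, out TPriority priority);
        public bool TryDequeue(out TElement element, out TPriority priority);
        public void EnqueueRange(IEnumerable<(TElement Element, TPriority Priority)> items);
        public int EnsureCapacity(int capacity);
        public void TrimExcess();
        public void Clear();
    }
}
```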
Hey, it would be great if you could have a look when you have the time and share your thoughts on the code 😊 Two questions:
/// <summary>
/// Specifies the arity of the d-ary heap, which here is quaternary.
/// </summary>
private const int Arity = 4;
How did you arrive at this arity being the best, rather than using 2? Can you share what performance tests you ran?
In the past @pgolebiowski has cited publications (non .NET specific) that point to 4-ary heaps being optimal. I've validated this empirically when working on my own prototypes.
4-ary heaps are frequently compared to classic binary heaps, which perform 2 comparison operations per level of heapify-down (1 to select the min value from the two child nodes and 1 to compare this min value with the current element). In this context, 4-ary heaps need to perform 4 operations per level (3 to select the min of the child nodes and 1 to compare this min value with the current element), but 1 level of a 4-ary heap spans the same as 2 levels of a binary heap. And 4-ary heaps have better cache locality and fewer comparison operations for heapify-up. So a clear win, but...
There is a common optimization for classic binary heaps which allows doing only 1 comparison per level: essentially, compare only the child nodes while sifting an empty node down to the end of the heap, and later try to place the last element into this empty node. Since we are replacing the last node, we also need to run heapify-up, but in practice this generates fewer comparison operations.
See this Stack Overflow thread:
https://stackoverflow.com/questions/6531543/efficient-implementation-of-binary-heaps
So in the end this will depend heavily on how expensive the IComparer implementation/call is.
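To make the optimization concrete, here is a minimal sketch of the "empty node" sift-down for an array-backed binary min-heap. This is illustrative only, not code from this PR; the field names and the helper itself are assumptions.

```csharp
using System.Collections.Generic;

// Illustrative binary min-heap fragment showing the "empty node"
// (bottom-up) sift-down described above.
internal sealed class BinaryHeapSketch<T>
{
    private T[] _nodes = new T[16];
    private int _size;
    private readonly IComparer<T> _comparer = Comparer<T>.Default;

    // Called after removing the root: sift a hole from the root to the
    // bottom using one comparison per level (children only), then place
    // the heap's former last element into the hole and sift it up.
    private void SiftDownHole(T lastElement)
    {
        int hole = 0;
        int child;
        while ((child = 2 * hole + 1) < _size)
        {
            // The only comparison on this level: pick the smaller child.
            if (child + 1 < _size && _comparer.Compare(_nodes[child + 1], _nodes[child]) < 0)
            {
                child++;
            }
            _nodes[hole] = _nodes[child];
            hole = child;
        }

        // Sift lastElement up from the hole; this usually terminates early,
        // which is why total comparisons tend to be fewer in practice.
        while (hole > 0)
        {
            int parent = (hole - 1) / 2;
            if (_comparer.Compare(lastElement, _nodes[parent]) >= 0)
            {
                break;
            }
            _nodes[hole] = _nodes[parent];
            hole = parent;
        }

        _nodes[hole] = lastElement;
    }
}
```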
I think it's an interesting avenue to explore, which should be validated using benchmarks that combine all operations.
So in the end this will depend heavily on how expensive the IComparer implementation/call is.
I think it is safe to assume that comparisons constitute the most expensive operations.
There is a common optimization for classic binary heaps which allows doing only 1 comparison per level: essentially, compare only the child nodes while sifting an empty node down to the end of the heap, and later try to place the last element into this empty node. Since we are replacing the last node, we also need to run heapify-up, but in practice this generates fewer comparison operations.
FWIW this could be generalized to d-ary heaps, where sifting down the empty node takes d-1 comparisons per level. I made an attempt at prototyping the approach for both binary and quad heaps; here are the results of a basic heapsort benchmark:
Standard heap implementation
Method | Size | Mean | Error | StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
---|---|---|---|---|---|---|---|---|
PriorityQueue_Quad | 10 | 570.3 ns | 11.13 ns | 14.47 ns | - | - | - | - |
PriorityQueue_Binary | 10 | 469.8 ns | 9.07 ns | 9.70 ns | - | - | - | - |
PriorityQueue_Quad | 50 | 4,878.1 ns | 93.69 ns | 107.89 ns | - | - | - | - |
PriorityQueue_Binary | 50 | 4,375.7 ns | 85.61 ns | 84.09 ns | - | - | - | - |
PriorityQueue_Quad | 150 | 21,555.0 ns | 414.36 ns | 323.51 ns | - | - | - | - |
PriorityQueue_Binary | 150 | 20,029.4 ns | 318.20 ns | 282.08 ns | - | - | - | - |
PriorityQueue_Quad | 500 | 101,990.0 ns | 1,697.57 ns | 1,587.91 ns | - | - | - | - |
PriorityQueue_Binary | 500 | 88,171.7 ns | 1,751.22 ns | 2,566.92 ns | - | - | - | - |
PriorityQueue_Quad | 1000 | 228,207.5 ns | 3,342.00 ns | 3,126.11 ns | - | - | - | - |
PriorityQueue_Binary | 1000 | 196,171.9 ns | 3,796.05 ns | 5,196.08 ns | - | - | - | - |
PriorityQueue_Quad | 10000 | 3,135,853.5 ns | 41,014.71 ns | 34,249.14 ns | - | - | - | - |
PriorityQueue_Binary | 10000 | 2,758,276.9 ns | 54,537.92 ns | 70,914.70 ns | - | - | - | - |
PriorityQueue_Quad | 1000000 | 517,934,746.7 ns | 9,633,228.63 ns | 9,010,928.04 ns | - | - | - | 1336 B |
PriorityQueue_Binary | 1000000 | 457,966,469.2 ns | 7,727,865.81 ns | 6,453,117.95 ns | - | - | - | - |
SiftDown optimization enabled
Method | Size | Mean | Error | StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
---|---|---|---|---|---|---|---|---|
PriorityQueue_Quad | 10 | 591.2 ns | 11.80 ns | 23.30 ns | - | - | - | - |
PriorityQueue_Binary | 10 | 478.3 ns | 9.39 ns | 13.17 ns | - | - | - | - |
PriorityQueue_Quad | 50 | 4,648.2 ns | 91.57 ns | 105.45 ns | - | - | - | - |
PriorityQueue_Binary | 50 | 3,777.7 ns | 74.84 ns | 109.70 ns | - | - | - | - |
PriorityQueue_Quad | 150 | 20,948.2 ns | 403.05 ns | 509.73 ns | - | - | - | - |
PriorityQueue_Binary | 150 | 17,050.4 ns | 219.24 ns | 171.17 ns | - | - | - | - |
PriorityQueue_Quad | 500 | 93,228.2 ns | 1,839.66 ns | 1,806.79 ns | - | - | - | - |
PriorityQueue_Binary | 500 | 83,268.9 ns | 1,482.10 ns | 1,386.36 ns | - | - | - | - |
PriorityQueue_Quad | 1000 | 197,737.6 ns | 3,428.66 ns | 2,863.09 ns | - | - | - | - |
PriorityQueue_Binary | 1000 | 176,310.1 ns | 3,443.65 ns | 4,229.11 ns | - | - | - | - |
PriorityQueue_Quad | 10000 | 2,588,508.4 ns | 50,021.64 ns | 55,598.92 ns | - | - | - | - |
PriorityQueue_Binary | 10000 | 2,312,044.9 ns | 44,753.63 ns | 43,954.04 ns | - | - | - | - |
PriorityQueue_Quad | 1000000 | 426,722,886.7 ns | 6,244,294.43 ns | 5,840,916.88 ns | - | - | - | - |
PriorityQueue_Binary | 1000000 | 406,408,665.1 ns | 8,071,721.73 ns | 14,961,453.78 ns | - | - | - | - |
While there seem to be performance gains, the key takeaway to me is that the particular binary heap implementation is actually faster than the equivalent quad heap, contradicting my earlier comment in this thread.
{
    this.nodes = new List<(TElement, TPriority)>();
    this.UnorderedItems = new UnorderedItemsCollection(this.nodes);
    this.Comparer = Comparer<TPriority>.Default;
@eiriktsarpalis, the use of a separate IComparer<TPriority> here rather than requiring TPriority : IComparable<TPriority> (or even just requiring TElement : IComparable<TElement> rather than having a separate priority) could have an impact on performance. E.g. for the latter, if the TPriority is a struct, the JIT can likely devirtualize the interface calls, but for the former, every Compare operation is likely to be an interface dispatch. Maybe that'll get a bit better when guarded devirtualization arrives, but it'd be interesting to understand, before we commit this design, exactly how much we're leaving on the table.
Does the JIT optimize Comparer<TPriority>.Default.Compare() calls, and is this something we could use to our advantage? (E.g. use separate sift implementations depending on whether the user has supplied a custom comparer.)
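For concreteness, a hypothetical sketch of that dual-path idea follows; the helper and its names are made up, not from this PR. The premise is that a direct call on Comparer<TPriority>.Default is cheaper than dispatching through a stored interface field (see the benchmark later in this thread) and is a candidate for further JIT devirtualization.

```csharp
using System.Collections.Generic;

// Hypothetical helper illustrating separate comparison paths depending on
// whether the user supplied a custom comparer (stored as null otherwise).
internal static class ComparerDispatchSketch
{
    public static int Compare<TPriority>(
        TPriority x, TPriority y, IComparer<TPriority>? customComparer)
    {
        if (customComparer is null)
        {
            // Direct call on the default comparer: cheaper than a call
            // through a stored IComparer<TPriority> field.
            return Comparer<TPriority>.Default.Compare(x, y);
        }

        // Custom comparer: an interface dispatch per comparison.
        return customComparer.Compare(x, y);
    }
}
```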
Does the JIT optimize Comparer<T>.Default.Compare() calls and is this something we could use to our advantage?

Currently, Comparer<T>.Default.Compare(_value1, _value2) will be better than _comparer.Compare(_value1, _value2) but not as good as _value1.CompareTo(_value2).
Requiring TPriority : IComparable<TPriority> would probably be fine for most uses, but I don't particularly like adopting TElement : IComparable<TElement>, since it would result in more boilerplate for the element types.

Currently, Comparer<T>.Default.Compare(_value1, _value2) will be [...] not as good as _value1.CompareTo(_value2).

Interesting, what is the main difference and what types does it concern? When I checked for numeric types, it seemed to have applied all relevant optimizations.
Interesting, what is the main difference and what types does it concern? When I checked for numeric types it seemed to have applied all relevant optimizations.
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Diagnosers;
using BenchmarkDotNet.Running;
using System;
using System.Collections.Generic;
public class Program
{
static void Main(string[] args) => BenchmarkSwitcher.FromAssemblies(new[] { typeof(Program).Assembly }).Run(args);
}
[DisassemblyDiagnoser(maxDepth: 3)]
[GenericTypeArguments(typeof(int))]
public class Tests<T> where T : IComparable<T>
{
private IComparer<T> _comparer = Comparer<T>.Default;
private T _a, _b;
[Benchmark]
public int ComparerInterface() => _comparer.Compare(_a, _b);
[Benchmark]
public int ComparerDirect() => Comparer<T>.Default.Compare(_a, _b);
[Benchmark]
public int CompareTo() => _a.CompareTo(_b);
}
Method | Mean |
---|---|
ComparerInterface | 1.8169 ns |
ComparerDirect | 1.3760 ns |
CompareTo | 0.5979 ns |
.NET Core 5.0 (CoreCLR 42.42.42.42424, CoreFX 42.42.42.42424), X64 RyuJIT
; Tests`1[[System.Int32, System.Private.CoreLib]].ComparerInterface()
mov [rsp+8],rcx
mov rcx,[rcx+8]
mov rdx,[rsp+8]
mov r8d,[rdx+14]
mov edx,[rdx+10]
mov r11,7FFD0F800638
mov rax,7FFD0F800638
mov rax,[rax]
jmp rax
; Total bytes of code 47
.NET Core 5.0 (CoreCLR 42.42.42.42424, CoreFX 42.42.42.42424), X64 RyuJIT
; Tests`1[[System.Int32, System.Private.CoreLib]].ComparerDirect()
mov edx,[rcx+10]
mov r8d,[rcx+14]
mov rcx,21332F32CD0
mov rcx,[rcx]
mov rax,offset System.Collections.Generic.GenericComparer`1[[System.Int32, System.Private.CoreLib]].Compare(Int32, Int32)
mov rax,[rax]
jmp rax
; Total bytes of code 36
; System.Collections.Generic.GenericComparer`1[[System.Int32, System.Private.CoreLib]].Compare(Int32, Int32)
cmp edx,r8d
jge short M01_L00
mov eax,0FFFFFFFF
jmp short M01_L02
M01_L00:
cmp edx,r8d
jle short M01_L01
mov eax,1
jmp short M01_L02
M01_L01:
xor eax,eax
M01_L02:
ret
; Total bytes of code 27
.NET Core 5.0 (CoreCLR 42.42.42.42424, CoreFX 42.42.42.42424), X64 RyuJIT
; Tests`1[[System.Int32, System.Private.CoreLib]].CompareTo()
lea rax,[rcx+10]
mov edx,[rcx+14]
cmp [rax],edx
jge short M00_L00
mov eax,0FFFFFFFF
jmp short M00_L02
M00_L00:
cmp [rax],edx
jle short M00_L01
mov eax,1
jmp short M00_L02
M00_L01:
xor eax,eax
M00_L02:
ret
; Total bytes of code 32
@AndyAyersMS, your change in dotnet/coreclr@ccc5e17 made EqualityComparer<T>.Default.Equals fully devirtualized and inlineable. Any thoughts on Comparer<T>.Default.Compare?
I think we should just go ahead and change to TPriority : IComparable<TPriority>. It significantly improves performance and simplifies the API signature. The only real downsides are not supporting null priorities and the potential boilerplate when custom comparisons are required, but realistically neither seems likely to impact users. Should we just change it in this PR?
Another disadvantage of TPriority : IComparable<TPriority> is being inconsistent with e.g. SortedSet<T> and SortedDictionary<TKey,TValue>. What's the impact of the potential boilerplate when custom comparisons are required?

There is value in releasing the alpha version for testing by customers, in a way that is consistent with other data structures, i.e. with using IComparer.
What's the impact of the potential boilerplate when custom comparisons are required?

I was thinking about users needing to define their own priority type with a custom IComparable<TPriority> implementation if the existing types don't satisfy their needs. You cannot change the ordering of, say, strings without wrapping them in a custom type (see the sketch below). In practice, though, this can be easily worked around depending on the use case.
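For instance, under a TPriority : IComparable<TPriority> constraint, reversing the ordering of string priorities would require a wrapper type along these lines (a hypothetical example; with the IComparer design, a one-line custom comparer suffices):

```csharp
using System;

// Hypothetical wrapper illustrating the boilerplate: a priority type that
// inverts the default ordinal ordering of strings.
public readonly struct ReverseOrdinalString : IComparable<ReverseOrdinalString>
{
    public ReverseOrdinalString(string value) => Value = value;

    public string Value { get; }

    // Arguments swapped relative to the usual pattern, so "larger" strings
    // compare as smaller and would dequeue first from a min-heap.
    public int CompareTo(ReverseOrdinalString other) =>
        string.CompareOrdinal(other.Value, Value);
}
```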
There is value in releasing the alpha version for testing by customers, in a way that is consistent with other data structures, i.e. with using IComparer.
I would tend to agree in general; however, the performance ramifications are too significant to ignore.
The JIT team are working on optimizing Comparer<T>.Default.CompareTo calls, so the best course of action is to stick with the existing IComparer design.
Cf. #39873
/// </summary>
private readonly List<(TElement element, TPriority priority)> nodes;

private const int RootIndex = 0;
Why is this needed?
I intended to avoid magic values in the code (representing them with constants instead) and to make the code readable/verbose/clear even to someone who is very tired and drunk 😄 Is there a disadvantage to this approach?
I think it's pretty obvious to most programmers that 0 represents the first index. If anything it makes reading the code harder, since I now need to look up what RootIndex is.
I'm not merely referring to the first index. We work with a quaternary tree that is implicitly represented in an array, the constant refers to the index in the underlying array that contains the root node of that tree. Far from obvious when all you have to work with is a number vs a named constant that tells you it's the root.
In any array heap the first index points to the root node if it exists. Not sure why this needs to be lifted to a constant.
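For readers following along, the implicit array layout being debated works like this (illustrative helpers, not necessarily the PR's exact code):

```csharp
internal static class QuadHeapLayout
{
    private const int Arity = 4;

    // The root lives at index 0; the children of the node at `index`
    // occupy indices (Arity * index) + 1 through (Arity * index) + Arity.
    public static int ParentIndex(int index) => (index - 1) / Arity;
    public static int FirstChildIndex(int index) => (Arity * index) + 1;
}
```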
/// Gets the index of the last node in the heap.
/// </summary>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private int GetLastNodeIndex() => this.nodes.Count - 1;
Nit: inlining nodes.Count - 1 is pretty self-explanatory. I don't think it justifies being factored out to a dedicated method.
Does the cost of having a dedicated method outweigh the benefit of: 1) having a readable/verbose expression in the code; 2) deduplication of this expression from its usages into a single place?
How is GetLastNodeIndex() more readable or less verbose than nodes.Count - 1? If used with indexers, you could simply write nodes[^1] to get the last node.
You know it's subjective. I can change this if you insist and wouldn't approve the PR without this change. To me it's more readable and verbosity is an advantage here.
Yes please. I would say it is pretty standard, and we have examples in this codebase where it is being used exactly in that way, e.g. in runtime/src/libraries/System.Private.CoreLib/src/System/Collections/Generic/List.cs (line 767 in d1d6ab1):
return LastIndexOf(item, _size - 1, _size);
return list[count - 1];
It's merely adapting the established convention.
Why? This is the functionality we landed on during API review. It is pay-for-play and does not require any modifications to the underlying data structure.
Thank you very much @eiriktsarpalis and @stephentoub for the review, much appreciated! ❤️ Asked a few questions. Regarding the method EnqueueDequeue:
I'm not sure I see the benefit of having this method, to be honest. What value does this method add?
Please refer to the link I shared in the previous post for the benefits. There's also an entry on Wikipedia for supplementary reading. It should explain why structures like Queue or Stack don't need this kind of method.
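For illustration, here is a small usage sketch of EnqueueDequeue: it behaves like an Enqueue followed by a Dequeue, but because both are folded into one operation, an incoming element whose priority is at or below the current root's can be returned immediately without modifying the heap.

```csharp
using System.Collections.Generic;

var queue = new PriorityQueue<string, int>();
queue.Enqueue("b", 2);
queue.Enqueue("c", 3);

// Equivalent to Enqueue("a", 1) followed by Dequeue(), but with at most
// one sift instead of two. Here "a" outranks the current root ("b", 2),
// so it is returned directly and never stored in the heap.
string top = queue.EnqueueDequeue("a", 1);  // returns "a"
```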
@@ -400,6 +400,7 @@ private void Remove(int indexOfNodeToRemove)
    var lastNode = _nodes[lastNodeIndex];
    _size--;
    _version++;
+   _nodes[lastNodeIndex] = default;
Suggested change: replace

_nodes[lastNodeIndex] = default;

with

if (RuntimeHelpers.IsReferenceOrContainsReferences<(TElement, TPriority)>())
{
    _nodes[lastNodeIndex] = default;
}

so the slot is cleared only when (TElement, TPriority) contains references; zeroing is needed only to let the GC collect the referenced objects.
private void SetCapacity(int capacity)
{
    Array.Resize(ref _nodes, capacity);
    _version++;
Why would we need to increment _version here? Resizing the buffer should not, on its own, invalidate existing enumerators. On the other hand, most methods calling SetCapacity will already increment _version if required.
That's the existing behavior in Queue.SetCapacity; excerpt from there:
private void SetCapacity(int capacity)
{
T[] newarray = new T[capacity];
if (_size > 0)
{
if (_head < _tail)
{
Array.Copy(_array, _head, newarray, 0, _size);
}
else
{
Array.Copy(_array, _head, newarray, 0, _array.Length - _head);
Array.Copy(_array, 0, newarray, _array.Length - _head, _tail);
}
}
_array = newarray;
_head = 0;
_tail = (_size == capacity) ? 0 : _size;
_version++;
}
This method (and consequently _version++) impacts the public methods TrimExcess and Enqueue in Queue.
In Dictionary.Generic.Tests.cs, it's tested that EnsureCapacity and TrimExcess invalidate enumeration.
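To illustrate what the version counter buys (assuming the usual versioned-enumerator pattern these collections share), mutating the queue while enumerating UnorderedItems is detected and throws:

```csharp
using System;
using System.Collections.Generic;

var queue = new PriorityQueue<string, int>();
queue.Enqueue("a", 1);
queue.Enqueue("b", 2);

try
{
    foreach ((string element, int priority) in queue.UnorderedItems)
    {
        queue.Enqueue("c", 3);  // bumps the version mid-enumeration
    }
}
catch (InvalidOperationException)
{
    // MoveNext() compares the enumerator's captured version against the
    // queue's current version and detects the modification.
    Console.WriteLine("Enumeration invalidated by mutation.");
}
```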
@eiriktsarpalis @stephentoub Regarding the thread that emerged on a possible path to further optimize the performance of the data structure, I'd like to ask that this be explored in a separate pull request. In particular, for anyone willing to investigate such opportunities, I'd like to highlight that the potential benchmarks will be complex, as we'd need to explore multiple scenarios to be able to compare solutions. For example, in this paper that compares various heap types, there are ~40 scenarios considered, each with a different balance of data sets and heap operations benchmarked.
_index = EndOfEnumeration;
}

public bool MoveNext()
This implementation is still more complicated than it needs to be; I don't see the feedback below addressed.
I'm keeping the named constants to write self-documenting code, but I've simplified it. Hope you like it better now 😊
string actualElement = queue.EnqueueDequeue("one-not-to-enqueue", 1);

Assert.Equal("one-not-to-enqueue", actualElement);
Nit: the correctness of the assertion really depends on the implementation of SmallPriorityQueueFactory.
Thank you for your contribution @pgolebiowski!
Woo-hoo! Thanks for the diligence and persistence, @pgolebiowski, and for the collaborative effort, @eiriktsarpalis and @stephentoub. This feature will be included in .NET 6.0 Preview 2. Could one of you add a comment to dotnet/core#5889 to get it included in the release notes for that Preview?