
GH-108362: Incremental GC implementation #108038

Merged: 17 commits merged into python:main from the incremental-gc branch on Feb 5, 2024

Conversation

@markshannon (Member) commented Aug 16, 2023

Implements incremental cyclic GC.
Instead of traversing one generation on each collection, we traverse the young generation and the oldest part of the old generation. By traversing the old generation a chunk at a time, we keep pause times down a lot.

See faster-cpython/ideas#613 for the idea and algorithm.
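To make the mechanics concrete, here is a toy Python model of the scheme, not the C implementation: survivors of young collections accumulate in one space of the old generation while the other space is scanned a bounded chunk at a time, and the two spaces flip once a full traversal completes. All names here (aging, oldest, increment_size, the 700/200 thresholds) are illustrative only.

from collections import deque

young = deque()
aging = deque()    # survivors of young collections accumulate here
oldest = deque()   # the space currently being scanned, a chunk at a time

def young_collect():
    # In the real GC only survivors are promoted; in this toy model
    # everything "survives".
    while young:
        aging.append(young.popleft())

def incremental_collect(increment_size):
    # Scan a bounded chunk of the oldest space, keeping the pause
    # proportional to the increment rather than to the whole heap.
    for _ in range(min(increment_size, len(oldest))):
        obj = oldest.popleft()
        # ...cycle detection over the increment would happen here...
    if not oldest:
        # Full traversal complete: flip the spaces and start over.
        oldest.extend(aging)
        aging.clear()

for _ in range(10_000):
    young.append(object())
    if len(young) >= 700:
        young_collect()
        incremental_collect(increment_size=200)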

@markshannon markshannon changed the title Incremental GC GH-108362: Incremental GC implementation Aug 23, 2023
@markshannon (Member Author)

Numbers for a recent commit, which is not tuned for performance beyond using incremental collection.

Speedup: 3-4%
Relative pause-time estimates, using objects visited as a proxy:

Collection    Main    This PR
Young         1       --
Incremental   --      3-4
Aging         9       --
Old           80      --

Shortest pauses go up, but no one cares about those.
It is throughput and longest pause times that matter.

Throughput is a few percent better, but that can also be achieved by increasing thresholds or only using one generation.
It is the longest pause time that is important and that is improved a lot.

The above numbers are from the pyperformance suite.

Stats: https://github.com/faster-cpython/benchmarking-public/blob/main/results/bm-20230813-3.13.0a0-328cfd4/bm-20230813-azure-x86_64-faster%252dcpython-incremental_gc-3.13.0a0-328cfd4-pystats.md

@markshannon markshannon marked this pull request as ready for review August 23, 2023 15:12
@markshannon (Member Author)

@pablogsal @nascheme want to take a look?

@pablogsal (Member) commented Aug 28, 2023

> @pablogsal @nascheme want to take a look?

I can take a look this Thursday 👍

@markshannon (Member Author)

@pablogsal?

@pablogsal (Member) commented Sep 8, 2023

> @pablogsal?

Hey Mark, sorry for the lack of review, but unfortunately I had an accident last week: I broke my finger and needed surgery. I am currently recovering from the surgery and the injury. I will try to review it ASAP, but it may take a bit more time. Apologies for the delay.

@markshannon (Member Author)

Take care.
There's no rush, we've got plenty of time before feature freeze.

@nascheme (Member) commented Sep 8, 2023

My impression is that this is a good idea. The long pause you can get from the current full collections could be quite undesirable, depending on your app. Regarding the statement that "it is guaranteed to collect all cycles eventually", I have some concern about what the worst case might be. E.g. if it collects eventually but takes 1 million runs of the GC to do it, that's not so great. This property sounds similar to what you get with the "Train Algorithm" for mark-and-sweep style collection.

I suppose we don't want to provide an option to switch between the current "collect everything" and incremental approaches. We could probably turn on the incremental by default and then let people turn it off if they run into trouble. I guess the other solution would be to downgrade to an older Python version.

@markshannon (Member Author)

> Regarding the statement that "It is guaranteed to collect all cycles eventually", I have some concern about what the worst case might be. E.g. if it collects eventually but takes 1 million runs of the GC to do it, that's not so great.

All garbage cycles present in the old generation will be collected in a single traversal of the old generation.
This is true because (ignoring the issue of finalizers):

  • Cycles are unreachable, so will never be modified during a traversal, regardless of how many increments it takes.
  • If an object is part of a cycle and that object is visited by an incremental collection, that cycle will be collected.
  • We visit all objects in the old generation before starting the next traversal.

Obviously how many incremental collections it takes to traverse the whole old generation depends on how big the old generation is, and how big the increments are.

@nascheme (Member)

If there is a garbage cycle with more than objects_per_collection objects in it, I don't see how it ever gets collected. A reference to an object from outside the collected set (i.e. not part of work) will make the object look alive to the GC, and clear_cycles_and_restore_refcounts() gets called at the end of the incremental collection, so the cycle never gets cleared.

@markshannon (Member Author)

> If there is a garbage cycle with more than objects_per_collection objects in it, I don't see how it ever gets collected.

Choosing an increment is done depth first, so if part of a cycle is in an increment, all of it must be. objects_per_collection is a guideline, not a hard limit; see faster-cpython/ideas#613 (comment).
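A hedged sketch of that selection rule; the function and parameter names here are illustrative, not from this PR. Because the closure of each chosen object is gathered depth first, a cycle touched by an increment is always contained entirely within it, and the budget can be overshot by the closure of the last object chosen.

import gc

def choose_increment(pending, budget):
    # `pending` is a list of candidate objects; `budget` is the soft
    # objects_per_collection-style limit discussed above.
    increment, seen = [], set()
    while pending and len(increment) < budget:
        stack = [pending.pop()]
        while stack:  # depth-first transitive closure
            obj = stack.pop()
            if id(obj) in seen:
                continue
            seen.add(id(obj))
            increment.append(obj)  # may exceed `budget`: it is a guideline
            stack.extend(o for o in gc.get_referents(obj) if gc.is_tracked(o))
    return increment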

@kumaraditya303 kumaraditya303 removed their request for review September 12, 2023 17:44
@nascheme (Member)

> Choosing an increment is done depth first, so if part of a cycle is in an increment, all of it must be. objects_per_collection is a guideline, not a hard limit

Oh, I see. In that case, if the collector encounters an object with many references, many more objects could be included in the collection. E.g. if you encounter sys.modules, you might examine basically all living objects. That's no worse than what's currently done with full collections, but I do wonder how much this incremental GC helps in practice. My guess would be that most of the time you are only working on a subgraph, but occasionally you will look at nearly all objects. Running such tests on real applications with big working sets could be informative.

@markshannon (Member Author)

In the worst case, where all the objects form a giant cycle, there is no way to avoid visiting all objects.
I doubt that happens in practice, but if it does we are no worse off than doing a full GC.

We can get long pauses if a large number of objects are reachable from a single object that isn't part of a cycle.
This is more likely to be a problem, but it is also no worse than doing a full GC.

Because we track the number of objects seen, if we end up doing a large collection we wait longer until the next one, so we do no more work overall; it is just done in bigger chunks.

Possible mitigations (for another PR):

At the start of the full cycle (after swapping the pending and scanning spaces) we could do a marking stage to mark the live objects.
Marking requires less work per object than tentative deletion, so it should lower the overhead.

Scanning the roots on the stack probably isn't a good idea as many of those could soon become garbage, but scanning sys.modules is probably a good idea.
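A rough sketch of that suggested marking pass (not part of this PR; mark_from_modules is a made-up name): everything reachable from sys.modules is marked up front, producing a set of objects an increment could treat as known-live instead of tentatively deleting them.

import gc
import sys

def mark_from_modules():
    marked = set()
    stack = [m for m in sys.modules.values() if m is not None]
    while stack:
        obj = stack.pop()
        if id(obj) in marked:
            continue
        marked.add(id(obj))
        # follow only GC-tracked referents, as the cyclic GC would
        stack.extend(o for o in gc.get_referents(obj) if gc.is_tracked(o))
    return marked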

@DinoV (Contributor) commented Oct 7, 2023

I want to try one more thing (which is to simulate a large app with lots of modules, classes, functions...) just to see how that interacts with sys.modules and behaves with the transitive walk.

uintptr_t aging = cf->aging_space;
if (_PyObject_IS_GC(op)) {
    PyGC_Head *gc = AS_GC(op);
    if (_PyObject_GC_IS_TRACKED(op) &&
@DinoV (Contributor):

I think including objects from the other space can lead to some problematic behavior. If you have a large object graph which is referenced from smaller, more frequently allocated objects, this will continuously pull that large object graph in and blow the budget. This is basically what I was concerned with regarding sys.modules: you can have a module which imports sys, you can be creating lambdas in that module which are short-lived but become cyclic trash, and to collect a lambda you need to traverse the world.

I also think that, given this behavior, the two different lists of objects aren't really necessary; you could instead just move the objects to the end of the collecting space and get back to them when you can. I think the behavior would be identical to the existing algorithm (except maybe we'd pick up some extra objects when we flip the spaces).

I think another potential problem with this behavior is that you're not eagerly adding objects in the current space to this list transitively. That means that if we visit a large object graph and blow the budget, other transitively referenced objects in the space we're collecting from may never get added to the container, so despite collecting a huge object graph we still won't have collected enough to clear the cycle.

Below is a program that seems to grow unbounded. I had briefly experimented with using the _PyGC_PREV_MASK_COLLECTING flag here to mark objects we want to include, instead of using the aging space, but that also didn't work (I would have expected that on some collections we'd get a bunch of these little cycles, and then after a flip we'd need to walk the large object graph once), so I'm not certain what exactly needs to be done to fix this.

class LinkedList:
    def __init__(self, next=None, prev=None):
        # Linking next/prev both ways makes every adjacent pair of
        # nodes a reference cycle.
        self.next = next
        if next is not None:
            next.prev = self
        self.prev = prev
        if prev is not None:
            prev.next = self


def make_ll(depth):
    # Build a chain of `depth` cyclically linked nodes.
    head = LinkedList()
    for i in range(depth):
        head = LinkedList(head, head.prev)
    return head


head = make_ll(10000)  # long-lived object graph, kept alive forever

while True:
    # Short-lived cyclic garbage that also references the long-lived
    # graph through the `surprise` attribute.
    newhead = make_ll(200)
    newhead.surprise = head

@markshannon (Member Author):

Any cycle present in the old generation at the start of the full scan (when we flip spaces) will be collected during that scan (before we flip spaces again). See #108038 (comment).

There will always be object graphs that perform badly, but all cycles will be collected eventually (assuming the program runs for long enough).

In this program most cycles created will be handled in the young collection, so I don't see a problem there. But if we increase depth so that the cycles outlive the young collection, it might take a while to collect them, and the first increment will likely have to visit all the nodes reachable from the global variable head.

@DinoV (Contributor):

I guess I wasn't clear enough (and maybe you didn't run it), because this program actually runs out of memory :) You're right that most cycles should be collected in young, but I'm guessing one survives every young collection, and those build up and are uncollectible. I think we probably only successfully collect one of the non-long-lived objects per collection, because we repeatedly re-process the long-lived linked list.

If I modify the program to hold onto 100 of these at a time before clearing them out, it runs out of memory even faster.

@markshannon (Member Author):

I'm seeing the memory use grow slowly, although seemingly without bounds. So something is wrong.

There are two spaces in the old generation: aging and oldest.
After a young or incremental collection we add survivors to the end of aging.
We only collect cycles within the oldest space. After a flip, all objects will be in the oldest space, so any cycles will be collected, not moved back to the aging space.

I modified your program to include gc.set_threshold(2000, 2, 0), which makes the incremental collector process objects five times as fast, and then the memory appears to stay bounded.

I was hoping to merge this and then play with thresholds, but it looks like we will need some sort of adaptive threshold before merging.

@DinoV (Contributor):

You only collect cycles in the oldest space, but the reason I placed this comment here is that you do gather the transitive closure from the aging space. Therefore I think the statement "We only collect cycles within the oldest space" is incorrect given this code: once you've included a single object from aging, you will consider its transitive closure as well.

But including these objects seems unnecessary: once you flip, you'll re-consider those objects and their transitive closure anyway.

And as I said before, I think this basically eliminates any usefulness of the two spaces... you may as well just move the objects to the end of the oldest space if you're willing to suck them into the transitive closure.

@markshannon (Member Author):

Why do you think we collect objects from the aging space?
That is not the intention, and I don't see where that happens.

@markshannon (Member Author) commented Oct 13, 2023

With this code:

class LinkedList:
    def __init__(self, next=None, prev=None):
        self.next = next
        if next is not None:
            next.prev = self
        self.prev = prev
        if prev is not None:
            prev.next = self


def make_ll(depth):
    head = LinkedList()
    for i in range(depth):
        head = LinkedList(head, head.prev)
    return head

import gc
#gc.set_threshold(2000, 2, 0)

M = 10_000  # size of the long-lived list
N = 5_000   # size of each batch of cyclic garbage

head = make_ll(M)
count = M

next_count = 1_000_000
while True:
    newhead = make_ll(N)
    newhead.surprise = head
    count += N
    if count >= next_count:
        print(f"Count = {int(count/1_000_000)}M")
        print(gc.get_stats()[:2])
        next_count += 1_000_000

I've upped the size of the lists so that they aren't collected by the young collection.
The memory grows unless the gc.set_threshold(2000, 2, 0) line is uncommented, in which case memory stays bounded because the incremental collector is able to keep up.

@DinoV (Contributor) commented Oct 13, 2023

FWIW this variation grows unbounded even with a greatly increased threshold (although slowly; I killed it after it got to 10 GB), but maybe there's some amount of auto-tuning where it would keep up. On a short run it also seems to spend ~25% of its time in gc_collect_region, per perf record on Linux:

import gc
gc.set_threshold(200000, 2, 0)

class LinkedList:
    def __init__(self, next=None, prev=None):
        self.next = next
        if next is not None:
            next.prev = self
        self.prev = prev
        if prev is not None:
            prev.next = self


def make_ll(depth):
    head = LinkedList()
    for i in range(depth):
        head = LinkedList(head, head.prev)
    return head


head = make_ll(10000)

olds = []  # keep 100 batches alive before dropping them all at once
while True:
    newhead = make_ll(200)
    newhead.surprise = head
    olds.append(newhead)
    if len(olds) == 100:
        print('clearing')
        del olds[:]

@markshannon (Member Author)

The first threshold just determines how often a collection is done; it shouldn't really affect whether the collector can keep up.
It is the second threshold that matters: if it is too high, the collector might not be able to keep up. It should always be able to keep up when set to 2. I'll try to investigate.
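For concreteness, a small usage sketch of the knobs being discussed. The semantics described here are this PR's, as stated in this thread (a second value of 2 reportedly makes the incremental collector process objects five times as fast as the default), not the historical generational meaning.

import gc

# First value: how many allocations trigger a young collection (and
# with it an increment). Second value: how much of the old generation
# each increment processes.
gc.set_threshold(2000, 2, 0)
print(gc.get_threshold())  # (2000, 2, 0)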

Since this program is doing nothing but producing cyclic garbage, I'm not surprised that it spends a lot of time in GC. Is it worse than the current GC?

@markshannon (Member Author)

I am seeing much the same behavior on 3.11 and main in terms of the count of objects being collected.
Have you tried your test program on 3.11 or main?
It is possible that we are getting worse fragmentation.

@DinoV (Contributor) commented Oct 23, 2023

> Since this program is doing nothing but producing cyclic garbage, I'm not surprised that it spends a lot of time in GC. Is it worse than the current GC?

Ahh, I hadn't compared the CPU time, and indeed the baseline GC spends as much time in GC, so never mind on the time spent :)

@DinoV (Contributor) commented Oct 23, 2023

> I am seeing much the same behavior on 3.11 and main in terms of the count of objects being collected. Have you tried your test program on 3.11 or main? It is possible that we are getting worse fragmentation.

I haven't been looking at the collection statistics but rather at memory usage. On the most recent program I see main staying at around 15 MB resident, while the incremental GC version grows unbounded (it reached 1 GB after ~2.5 minutes, 2 GB after ~5 minutes, and over 3 GB after 10 minutes).

@markshannon (Member Author)

With your latest example, the stats show the leak as well.
I've no idea why as yet, but I will investigate.

@DinoV (Contributor) left a review:

LGTM!

@markshannon markshannon merged commit 36518e6 into python:main Feb 5, 2024
34 checks passed
@markshannon markshannon deleted the incremental-gc branch February 5, 2024 18:28
@bedevere-bot

⚠️⚠️⚠️ Buildbot failure ⚠️⚠️⚠️

Hi! The buildbot s390x Debian 3.x has failed when building commit 36518e6.

What do you need to do:

  1. Don't panic.
  2. Check the buildbot page in the devguide if you don't know what the buildbots are or how they work.
  3. Go to the page of the buildbot that failed (https://buildbot.python.org/all/#builders/49/builds/7912) and take a look at the build logs.
  4. Check if the failure is related to this commit (36518e6) or if it is a false positive.
  5. If the failure is related to this commit, please, reflect that on the issue and make a new Pull Request with a fix.

You can take a look at the buildbot page here:

https://buildbot.python.org/all/#builders/49/builds/7912

Failed tests:

  • test.test_multiprocessing_spawn.test_processes
  • test.test_multiprocessing_forkserver.test_processes
  • test.test_multiprocessing_fork.test_processes


Traceback logs (abridged):
HEAD is now at 36518e69d7 GH-108362: Incremental GC implementation (GH-108038)
Switched to and reset branch 'main'

make: *** [Makefile:2099: buildbottest] Error 2

@bedevere-bot

⚠️⚠️⚠️ Buildbot failure ⚠️⚠️⚠️

Hi! The buildbot s390x SLES 3.x has failed when building commit 36518e6.


You can take a look at the buildbot page here:

https://buildbot.python.org/all/#builders/540/builds/7870

Failed tests:

  • test.test_multiprocessing_forkserver.test_processes
  • test.test_multiprocessing_fork.test_processes


Traceback logs (abridged):
HEAD is now at 36518e69d7 GH-108362: Incremental GC implementation (GH-108038)
Switched to and reset branch 'main'

make: *** [Makefile:2096: buildbottest] Error 2

@bedevere-bot

⚠️⚠️⚠️ Buildbot failure ⚠️⚠️⚠️

Hi! The buildbot s390x RHEL7 3.x has failed when building commit 36518e6.


You can take a look at the buildbot page here:

https://buildbot.python.org/all/#builders/179/builds/6519

Failed tests:

  • test.test_multiprocessing_spawn.test_processes
  • test.test_multiprocessing_forkserver.test_processes
  • test.test_multiprocessing_fork.test_processes


Traceback logs (abridged):
HEAD is now at 36518e6... GH-108362: Incremental GC implementation (GH-108038)
Switched to and reset branch 'main'

Objects/unicodeobject.c: In function ‘unicode_endswith’:
Objects/unicodeobject.c:13043:23: warning: ‘subobj’ may be used uninitialized in this function [-Wmaybe-uninitialized]
             substring = PyTuple_GET_ITEM(subobj, i);
                       ^
Objects/unicodeobject.c: In function ‘unicode_startswith’:
Objects/unicodeobject.c:12989:23: warning: ‘subobj’ may be used uninitialized in this function [-Wmaybe-uninitialized]
             substring = PyTuple_GET_ITEM(subobj, i);
                       ^
Python/instrumentation.c: In function ‘allocate_instrumentation_data’:
Python/instrumentation.c:1489:9: warning: missing braces around initializer [-Wmissing-braces]
         code->_co_monitoring->local_monitors = (_Py_LocalMonitors){ 0 };
         ^
Python/instrumentation.c:1489:9: warning: (near initialization for ‘(anonymous).tools’) [-Wmissing-braces]
Python/instrumentation.c:1490:9: warning: missing braces around initializer [-Wmissing-braces]
         code->_co_monitoring->active_monitors = (_Py_LocalMonitors){ 0 };
         ^
Python/instrumentation.c:1490:9: warning: (near initialization for ‘(anonymous).tools’) [-Wmissing-braces]
./Modules/_xxinterpchannelsmodule.c: In function ‘_channel_get_info’:
./Modules/_xxinterpchannelsmodule.c:1984:21: warning: missing braces around initializer [-Wmissing-braces]
     *info = (struct channel_info){0};
                     ^
./Modules/_xxinterpchannelsmodule.c:1984:21: warning: (near initialization for ‘(anonymous).status’) [-Wmissing-braces]

make: *** [buildbottest] Error 2

@bedevere-bot

⚠️⚠️⚠️ Buildbot failure ⚠️⚠️⚠️

Hi! The buildbot AMD64 Debian root 3.x has failed when building commit 36518e6.


You can take a look at the buildbot page here:

https://buildbot.python.org/all/#builders/345/builds/7026

Failed tests:

  • test.test_multiprocessing_spawn.test_processes
  • test.test_multiprocessing_forkserver.test_processes


Traceback logs (abridged):
HEAD is now at 36518e69d7 GH-108362: Incremental GC implementation (GH-108038)
Switched to and reset branch 'main'

configure: WARNING: pkg-config is missing. Some dependencies may not be detected correctly.

make: *** [Makefile:2095: buildbottest] Error 2

@bedevere-bot

⚠️⚠️⚠️ Buildbot failure ⚠️⚠️⚠️

Hi! The buildbot AMD64 FreeBSD 3.x has failed when building commit 36518e6.


You can take a look at the buildbot page here:

https://buildbot.python.org/all/#builders/1223/builds/1847

Failed tests:

  • test.test_multiprocessing_forkserver.test_processes
  • test.test_multiprocessing_spawn.test_processes
  • test.test_multiprocessing_fork.test_processes


Traceback logs (abridged):

HEAD is now at 36518e69d7 GH-108362: Incremental GC implementation (GH-108038)
Switched to and reset branch 'main'

@markshannon markshannon restored the incremental-gc branch February 6, 2024 18:20
@markshannon markshannon deleted the incremental-gc branch February 6, 2024 18:20
@vstinner (Member) commented Feb 7, 2024

See issue gh-115124: AMD64 Windows11 Bigmem 3.x: test_bigmem failed with an !_Py_IsImmortal(FROM_GC(gc)) assertion error. PR #114931 or PR #108038 caused a regression.

@vstinner (Member) commented Feb 7, 2024

See issue gh-115127: multiprocessing test_thread_safety() fails with a "gc_list_is_empty(to) || gc_old_space(to_tail) == gc_old_space(from_tail)" assertion error.

markshannon added a commit to faster-cpython/cpython that referenced this pull request Feb 7, 2024
@markshannon markshannon restored the incremental-gc branch February 7, 2024 09:55
pablogsal pushed a commit that referenced this pull request Feb 7, 2024

Revert "GH-108362: Incremental GC implementation (GH-108038)" (#115132)

This reverts commit 36518e6.
fsc-eriker pushed a commit to fsc-eriker/cpython that referenced this pull request Feb 14, 2024
fsc-eriker pushed a commit to fsc-eriker/cpython that referenced this pull request Feb 14, 2024