GH-148937: fix for free-threaded GC (RSS based defer) #148940
nascheme wants to merge 8 commits into python:main from
Conversation
Note that this adds two extra stop/start-the-world points. We need a stop-the-world (STW) pause to call the mimalloc APIs that compute the memory usage (iterating through the arenas). We could likely consolidate one or both of these with existing STW points, but I think that makes the code more complex, so I decided to keep it simple for now. I think we should backport this change to 3.14.
Based on a suggestion from Sam, I changed it to instead estimate mimalloc memory use by counting full mimalloc pages. This requires a couple of changes to mimalloc itself but avoids the STW blocks and so should perform better. The accounting happens when a page transitions from non-full to full, and so should have minimal performance overhead.
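A toy Python model of the accounting just described (`ToyPage`, `alloc_block`, and `free_block` are illustrative names, not the actual mimalloc change; the real counter is an atomic in C):

```python
from dataclasses import dataclass

@dataclass
class ToyPage:
    block_size: int        # bytes per block
    capacity: int          # number of blocks in the page
    used: int = 0          # blocks currently allocated
    counted: bool = False  # whether this page is in full_page_bytes

full_page_bytes = 0  # per-heap counter (an atomic in the real C code)

def alloc_block(page: ToyPage) -> None:
    global full_page_bytes
    page.used += 1
    if page.used == page.capacity and not page.counted:
        # non-full -> full transition: count the whole page once
        page.counted = True
        full_page_bytes += page.block_size * page.capacity

def free_block(page: ToyPage) -> None:
    global full_page_bytes
    if page.counted:
        # full -> non-full transition: uncount the page
        page.counted = False
        full_page_bytes -= page.block_size * page.capacity
    page.used -= 1

page = ToyPage(block_size=64, capacity=4)
for _ in range(4):
    alloc_block(page)
assert full_page_bytes == 64 * 4   # page became full, counted once
free_block(page)
assert full_page_bytes == 0        # back to non-full, uncounted
```

The key property is that the counter only moves on the full/non-full transitions, not on every allocation, which is why the overhead is minimal in the common case.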
Benchmark results from cyclotron. The first table compares 3.14.3t to this PR (base=./py-3.14t/bin/python vs new=/home/nas/src/cpython/python); note that the r-trash numbers are mostly small. The table below it compares 3.13 (GIL, generational GC) with this PR (base=/usr/bin/python3 vs new=/home/nas/src/cpython/python). Uniform columns omitted; rows matched by wl/cycle/extra/live/cyc%.
colesbury left a comment:
I'm most concerned about the logic for full_page_bytes:
- Abandoned/reclaimed pages (see comment below)
- I think we're missing counts for large/huge pages (MI_BIN_HUGE) that don't get marked as full.
// own pool), so the counter stays valid across abandon/reclaim without any
// hand-off -- abandon and reclaim therefore have no hooks of their own.
I think there's still a problem here where we can double count or lose pages that are counted in full_page_bytes:
- Page becomes full
- Page is abandoned - now no longer marked as full but still counted in full_page_bytes
- Block is freed from page
- Page is reclaimed
- Block is allocated from page - now full and double counted
I think there's a lot of subtleties here with abandoned pages. I'm not entirely sure what the right approach is.
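The sequence above can be replayed in a toy Python model (all names here are hypothetical, not actual mimalloc code) to show how the counter ends up double counting:

```python
full_page_bytes = 0
PAGE_BYTES = 64 * 4  # block_size * capacity for one toy page

class ToyPage:
    def __init__(self):
        self.is_full = False

def set_full(page):
    # non-full -> full: add the page's bytes to the counter
    global full_page_bytes
    page.is_full = True
    full_page_bytes += PAGE_BYTES

def abandon(page):
    # abandoning clears the "full" state but deliberately does not
    # touch the counter -- the counter stays with the owning pool
    page.is_full = False

def free_block_and_reclaim(page):
    # a block is freed from the abandoned page and the page is later
    # reclaimed; the page is no longer full, yet no subtraction happened
    pass

page = ToyPage()
set_full(page)                 # step 1: page becomes full, counted
abandon(page)                  # step 2: abandoned, full flag cleared
free_block_and_reclaim(page)   # steps 3-4: block freed, page reclaimed
set_full(page)                 # step 5: fills up again, counted again
assert full_page_bytes == 2 * PAGE_BYTES  # double counted
```

The bytes from the first fill are never subtracted, so the second fill inflates the estimate; symmetrically, bytes can be lost if the page never refills.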
I didn't find a good fix for that abandoned->freed->reclaimed hole. My hope is that it doesn't happen too often, and so if the GC runs a bit more often as a result, that's okay.
// Total bytes (block_size * capacity) of pages currently in MI_BIN_FULL
// state whose pool association is this pool.
mi_decl_cache_align _Atomic(intptr_t) full_page_bytes; // = 0
I'm worried about contention here because all full/not-full operations modify this shared variable. Repeated allocation/deallocation of a single block can cause the containing page to be repeatedly marked as full/not-full.
I'd prefer we do the counting in per-thread state instead of here, which is effectively per-interpreter state. We can add the total to a per-interpreter counter when the thread exits. That means gc_should_collect_mem_usage would need to loop over all thread states to get an estimate of the allocated bytes, but I think that's a worthwhile tradeoff.
I added full_page_bytes to the mi_heap_t struct, which is per-thread. Putting it inside the Python thread state does work but it adds some extra complication.
Asking the OS for the process memory usage doesn't work well given how mimalloc works. It does not promptly return memory to the OS, so memory use doesn't drop after cyclic trash is freed. Instead of asking the OS, use mimalloc APIs to compute how much memory is being used by all mimalloc arenas. We need to stop the world to do this, but usually we can avoid doing a collection, so from a performance perspective it is worth it.
It's probably better to call this inside of gc_collect_main(). That way, we are not doing the STW pause from inside the _PyObject_GC_Link() function. This should have no significant performance impact since we hit this only after the young object count hits the threshold.
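The ordering described in this commit message can be sketched in Python (a hypothetical model; `should_collect`, `growth_factor`, and the 2x growth heuristic are illustrative assumptions, not the actual CPython logic):

```python
# Hypothetical sketch: the memory-usage check runs only after the young
# object count passes its threshold, so the potentially expensive memory
# estimate stays off the hot allocation path.

def should_collect(young_count, threshold, estimate_bytes, last_bytes,
                   growth_factor=2.0):
    if young_count < threshold:
        return False           # common case: a cheap counter comparison
    # Only now consult the memory estimate; defer the collection if
    # memory use has not grown enough since the last collection.
    return estimate_bytes() >= last_bytes * growth_factor

# Below the count threshold: never collect, estimate never computed.
assert not should_collect(10, 1000, lambda: 10**9, 10**6)
# Over the threshold and memory grew a lot: collect.
assert should_collect(2000, 1000, lambda: 10**9, 10**6)
# Over the threshold but memory barely grew: defer.
assert not should_collect(2000, 1000, lambda: 10**6, 10**6)
```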
This avoids using STW in exchange for less accurate memory usage estimates.
This should avoid memory contention. Avoid casting *intptr_t to *Py_ssize_t. Include large and huge pages in the count (promote them eagerly to MI_BIN_FULL). Add a comment noting that abandoned pages can potentially be lost (their byte count never being subtracted).
I modified the PR to handle mimalloc large and huge pages. That requires a bit of extra mimalloc change, which is a bit scary. So I think it would be better for this to be in 3.15 for a while before we backport to 3.14 (assuming we think this approach is acceptable). The biggest remaining issue, IMO, is the abandoned->free leak. I think the extra complication or slowdown we would pay to fix that is not worth it.
Tim Peters has a GC stress tester that quickly shows the issue, linked below. Before this fix, when I run it, the process RSS quickly goes up to 1 GB. After the fix, the RSS stays at about 100 MB. For comparison, the 3.13 GC keeps RSS at about 200 MB.
tim-gc-test.py
Benchmark results