Move all `cuda.core.system` enums into `cuda.core.system.typing` #2022

mdboom merged 13 commits into NVIDIA:main
Conversation
I have a naming concern: When I see
@leofang: What do you think? I know you sort of see enums as a type-checking feature (though they are a bit more than that). I'm on the fence. If we change here we should also change
This is a pure agent review. I'm posting it here before drilling down myself, for visibility because of our approaching deadlines.

Cursor GPT-5.4 Extra High Fast

Findings
Open Questions
Change Summary
This is totally fine. The enum in question (
This was on purpose. It was unintentionally public before. It is now private like all other "helper" classes in
Good catch. Fixed.
This is fine. The sort of import hooks this once used are pretty broken by our "megapackage" approach. I think it's good enough to just declare the places where we might find public enums.
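As a rough illustration of what "declaring the places where we might find public enums" could look like in a coverage check (a sketch only, with made-up names; not the actual `test_enum_coverage.py` logic):

```python
import enum
import types


def public_enums(module):
    """Yield names of public Enum subclasses exposed by a module.

    A sketch of the kind of scan a coverage test might run over each
    declared module; the real test_enum_coverage.py may differ.
    """
    for name in dir(module):
        if name.startswith("_"):
            continue
        obj = getattr(module, name)
        if isinstance(obj, type) and issubclass(obj, enum.Enum):
            yield name


# Demonstrate on a throwaway module with one public and one private enum.
mod = types.ModuleType("fake_typing_module")


class Visible(enum.Enum):
    A = 1


class _Hidden(enum.Enum):
    B = 2


mod.Visible = Visible
mod._Hidden = _Hidden

print(sorted(public_enums(mod)))  # prints ['Visible']
```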
Yes, this is required to get around a cyclical import issue.
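For readers unfamiliar with the pattern, deferring an import into a function body is one standard way around a cyclical import; whether cuda.core uses exactly this mechanism is an assumption, and the package and names below are invented for the demo:

```python
import os
import sys
import tempfile
import textwrap

# Build a tiny package on disk: cycle_demo_pkg.b imports cycle_demo_pkg.a
# at top level, while cycle_demo_pkg.a defers its import of
# cycle_demo_pkg.b into a function body, so neither module sees a
# half-initialized partner at import time.
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "cycle_demo_pkg")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()

with open(os.path.join(pkg_dir, "a.py"), "w") as f:
    f.write(textwrap.dedent("""\
        def get_value():
            # Deferred import: by the time this runs, cycle_demo_pkg.b
            # has finished initializing, so the cycle never bites.
            from cycle_demo_pkg.b import VALUE
            return VALUE
        """))

with open(os.path.join(pkg_dir, "b.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import cycle_demo_pkg.a  # top-level import the other way is fine
        VALUE = 42
        """))

sys.path.insert(0, tmp)
import cycle_demo_pkg.b  # noqa: E402  (loads cycle_demo_pkg.a too)
from cycle_demo_pkg.a import get_value  # noqa: E402

print(get_value())  # prints 42
```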
Yes.
Logging a somewhat unusual observation: https://github.com/NVIDIA/cuda-python/actions/runs/25394895050/job/74482106980?pr=2022 failed without logging any error message.

Agent take:
This is a follow-on to #2014, and based on a comment in #2016 that all of these new enums should go in a separate `typing` module dedicated to this and type annotations.

For `cuda.core.system`, we decided to put the enums in `cuda.core.system.typing` rather than `cuda.core.typing` because `cuda.core.system` is deliberately designed to be a little bit independent of CUDA. (It could become its own package someday, or be under a different namespace, etc.)

This also addresses a few small bugs in the `test_enum_coverage.py` tests that were discovered while working on #2016. Otherwise, this PR is exclusively moving content and updating imports and doc references accordingly.