compiler-rt: Provide __cpu_indicator_init, __cpu_model and __cpu_features2 #20081
base: master
Conversation
#1018?
Yep, that was it. Thanks.
Missing support for these symbols is a blocker for compiling:

error: ld.lld: undefined symbol: __cpu_indicator_init
note: referenced by gc-get.c:584 (src/gc-get.c:584)
note: mdbx-static.o:(scan4seq_resolver) in archive [...]/libmdbx.a
error: ld.lld: undefined symbol: __cpu_model
note: referenced by gc-get.c:587 (src/gc-get.c:587)
note: mdbx-static.o:(scan4seq_resolver) in archive [...]/libmdbx.a

I really appreciate this PR and hope it gets merged soon. 🫡
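For context, undefined references like these typically come from a hand-written ifunc resolver that uses the GCC/Clang CPU-dispatch builtins; a minimal sketch (names are made up, not taken from libmdbx) looks roughly like:

```c
#include <stddef.h>

typedef int (*scan_fn)(const unsigned *data, size_t len);

/* Two trivial stand-in implementations of the same operation. */
static int scan_avx2(const unsigned *data, size_t len)    { (void)data; return (int)len * 2; }
static int scan_generic(const unsigned *data, size_t len) { (void)data; return (int)len; }

/* The dynamic loader calls this once to decide which implementation `scan`
 * resolves to. __builtin_cpu_init() lowers to a call to __cpu_indicator_init,
 * and __builtin_cpu_supports() reads __cpu_model / __cpu_features2 -- exactly
 * the compiler-rt symbols this PR provides. */
static scan_fn scan_resolver(void) {
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx2"))
        return scan_avx2;
    return scan_generic;
}

/* ELF indirect function: bound via scan_resolver at load time. */
int scan(const unsigned *data, size_t len) __attribute__((ifunc("scan_resolver")));
```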
As a temporary solution, you can use cpu_model, which provides the __cpu_indicator_init and __cpu_model symbols.
Force-pushed from 577394c to 167cb40.
Rebased to latest master.
Force-pushed from 167cb40 to a481fa8.
Rebased again. I'm not entirely happy with …
Is it actually problematic if the standard library …
Force-pushed from 172a879 to edca94a.
Made a new commit regarding this. I moved the enum definition file into lib/std/zig/system and modified some of the lib/std/zig/system/x86.zig functions to return the Vendor, Type and Subtype enums as part of detection, in addition to the pointer to Target.Cpu.Model. Some of these decls are also exposed so that the compiler-rt code can use them. Now, the only logic in the compiler-rt file simply converts between Target.Cpu.Feature.Set and the actual values expected by LLVM for the …
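For reference, the "values expected by LLVM" populate the libgcc-compatible globals below; this layout comes from libgcc's cpuinfo / LLVM compiler-rt's cpu_model and is not quoted from the comment above:

```c
/* ABI that LLVM/libgcc-style multiversioning resolvers expect. */
struct __processor_model {
  unsigned int __cpu_vendor;      /* value from the Vendor enum */
  unsigned int __cpu_type;        /* value from the Type enum */
  unsigned int __cpu_subtype;     /* value from the Subtype enum */
  unsigned int __cpu_features[1]; /* first word of the feature bitset */
};

extern struct __processor_model __cpu_model;
extern unsigned int __cpu_features2[]; /* additional feature bits (an array in
                                          newer versions, a single word historically) */
extern int __cpu_indicator_init(void); /* fills in the globals above */
```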
Force-pushed from 44ae1fb to d793f18.
@alexrp are you interested in shepherding this one?
I do want a final review because it looks very messy.
if (!arch.isX86()) return;

const abi = target.result.abi;
if (target.result.ofmt != .elf or !(abi.isMusl() or abi.isGnu())) return;
Why limit to these targets?
I don't exactly remember why. It must have been from whatever standalone test I originally copied from. I'll remove it and see if anything breaks.
Oh, I see. On macOS:
/Users/runner/work/zig/zig/build/../lib/std/Build/Step/CheckObject.zig:506:17: 0x10a7671e1 in checkInDynamicSymtab (build)
else => @panic("Unsupported target platform"),
But really, since the test is now actually testing multiversioning at runtime, is there any value in doing these symbol checks anyway?
These match what LLVM expects and generates when building multiversioned functions (e.g. functions tagged with attributes target or target_clones). The actual CPU detection reuses the logic under std/zig/system/x86.zig, and transforms it into the values that LLVM-generated code expects. These values are auto-generated using a new tool (tools/update_x86_cpu_model_enums.zig) which parses the relevant file in the LLVM codebase and generates Zig enums out of it.
- Use pointers in export builtin
- Update names for Type enum tag
The new logic is closer to how it's done in C where any unspecified enum simply increments the previous value.
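As an illustration of the C rule being mirrored here (names and values are examples only, not copied from X86TargetParser.def):

```c
/* An enumerator without an explicit value is the previous one plus 1, so the
 * generated enums only need explicit values where the .def file assigns them. */
enum example_cpu_type {
  EXAMPLE_TYPE_UNKNOWN = 0,
  EXAMPLE_TYPE_A,      /* 1 */
  EXAMPLE_TYPE_B,      /* 2 */
  EXAMPLE_TYPE_C = 10, /* explicit value restarts the sequence */
  EXAMPLE_TYPE_D,      /* 11 */
};
```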
This lets us reuse the existing CPU detection in std.zig.system.x86. Hence, lib/compiler_rt/x86_cpu_model.zig simply translates the detected Target.Cpu.Feature.Set into the feature set expected by LLVM.
Checks the exit code, which depends on the running CPU feature set.
Move the functionality and code to update_cpu_features.
Instead of checking this at comptime, they are checked at the point of generation in tools/update_cpu_features.zig.
Force-pushed from d793f18 to 530463d.
if (std.Target.x86.featureSetHas(cpu.features, .avx512vnni)) {
    run_exe.expectExitCode(3);
} else if (std.Target.x86.featureSetHas(cpu.features, .avx2)) {
    run_exe.expectExitCode(2);
} else {
    run_exe.expectExitCode(1);
}
This'll work for most cases but it isn't quite correct: multiversioning will dynamically detect the features of the system the program is run on. The current approach breaks if I invoke this test manually with -Dcpu=... . So really, you should be checking b.graph.host features here.
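A hypothetical sketch of what such a runtime test looks like on the C side (not the actual test file from this PR): each version returns a distinct exit code, and the compiler-emitted resolver, which consults __cpu_model and friends, picks the best version for the CPU the binary actually runs on; hence the suggestion to compare against the build host's features rather than the -Dcpu selection.

```c
/* Clang function multiversioning in C via the target attribute.
 * The resolver emitted for `level` calls into compiler-rt's CPU model. */
__attribute__((target("avx512vnni"))) int level(void) { return 3; }
__attribute__((target("avx2")))       int level(void) { return 2; }
__attribute__((target("default")))    int level(void) { return 1; }

int main(void) {
    return level(); /* exit code 3, 2 or 1 depending on the running CPU */
}
```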
Fixes #18074, original PR was #18193.
This PR adds support for x86/x86_64 CPU model detection and function multiversioning in C/C++ via the target and target_clones attributes (compatible with LLVM's implementation, which in turn is compatible with libgcc's). It adds the __cpu_indicator_init, __cpu_model and __cpu_features2 symbols to compiler-rt for x86/x86_64, which reuse the CPU detection logic under std/zig/system/x86.zig and transform the detected features into the values expected by LLVM-generated code. These values are kept in sync with LLVM using a new tool (tools/update_x86_cpu_model_enums.zig) which parses llvm/include/llvm/TargetParser/X86TargetParser.def to generate the corresponding Zig enums.

This PR also adds a standalone test that the symbols are emitted when the target attribute is used in C.

While this functionality could in principle contribute to solving issue #4591 (edit: wrong issue, it's #1018), I think it would be best to use a Zig-specific solution for richer CPU detection capabilities, leveraging std.Target.* instead of relying only on the feature set supported by LLVM/libgcc. The changes in this PR would therefore only be useful for C/C++ compatibility with Clang and GCC.
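As a quick illustration of the other attribute mentioned above (a sketch, not code from this PR): target_clones is the more automatic variant. The compiler emits one clone of the function per listed target plus an ifunc resolver, and it is that resolver that pulls __cpu_indicator_init, __cpu_model and __cpu_features2 out of compiler-rt.

```c
/* Clang/GCC emit an avx2 clone, a default clone, and a resolver that picks
 * between them at load time using the CPU model populated by compiler-rt. */
__attribute__((target_clones("avx2", "default")))
long sum(const long *v, int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += v[i]; /* the avx2 clone can be auto-vectorized with AVX2 */
    return s;
}
```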