
SPIR-V target? #2683

Closed
jbsolomon opened this issue Jun 14, 2019 · 17 comments
Labels
accepted (This proposal is planned.) · arch-spirv (Khronos Group SPIR-V) · proposal (This issue suggests modifications. If it also has the "accepted" label then it is planned.)
Milestone

Comments

@jbsolomon
Contributor

jbsolomon commented Jun 14, 2019

Hello,

What would be involved in adding SPIR-V as a target for Zig? There's a translator from (some subset of?) LLVM to SPIR-V; as I understand it, Zig compiles first to LLVM, so this seems reasonable. There's been some discussion of adding SPIR-V as a Clang target which I'm not sure has materialized further, but I think this would be interesting.

This document goes into detail about the representation of SPIR-V in LLVM:
https://github.com/KhronosGroup/SPIRV-LLVM/blob/khronos/spirv-3.6.1/docs/SPIRVRepresentationInLLVM.rst

@emekoi
Contributor

emekoi commented Jun 15, 2019

Adding a custom SPIR-V target might lay the groundwork for adding other backends to Zig.

@andrewrk andrewrk added this to the 0.6.0 milestone Jun 16, 2019
@andrewrk andrewrk added the proposal This issue suggests modifications. If it also has the "accepted" label then it is planned. label Jun 16, 2019
@andrewrk
Member

The most straightforward way for this to work would be if LLVM supported it directly. However, it's still open for discussion to do this even if that scenario does not happen.

@jbsolomon
Contributor Author

One potentially fruitful direction could be to look at how Clang does this:
https://clang.llvm.org/docs/UsersManual.html#opencl-features

It has nvptx64 and amdgcn targets and can emit LLVM bitcode for them, and the LLVM-to-SPIR-V translator is also based on libLLVM, I think, so somebody who knows more than I do could probably figure it out. It seems complicated, though.

@andrewrk
Member

Some advice from @paniq about implementing such a backend:

<lritter> ...there's https://www.khronos.org/registry/spir-v/specs/1.0/SPIRV.html, but also have a look at SpvBuilder in glslang - i made a copy of that one and expanded it a little
<lritter> also, you will need SPIRV Tools for the validator and other stuff. SPIRV Cross can then convert your SPIR-V to GLSL, which is also great for seeing if your stuff produces the code you have in mind

@andrewrk andrewrk added the accepted This proposal is planned. label Jan 8, 2020
@andrewrk andrewrk modified the milestones: 0.6.0, 0.7.0 Jan 8, 2020
@jbsolomon
Contributor Author

jbsolomon commented Mar 5, 2020

Hi @andrewrk -- I think LLVM is well on its way to supporting this, but it seems to me that it might be necessary or useful to use MLIR. I'm currently exploring this on my own, and it definitely seems like a cool project! I'm far from grokking it just yet, but it should be possible to target other GPU backends besides SPIR-V/Vulkan this way, too. IREE is taking this approach to compile from TensorFlow through LLVM.

@BarabasGitHub
Contributor

How would this work with defining images, buffers, input/output variables and all the binding slots and stuff? Or would this just be for the OpenCL use case? (Or maybe all of this can be solved 'easily'?)

I see they want to compile C++ to SPIR-V 🤦‍♂ RUN AWAY! 😱

@jbsolomon
Contributor Author

jbsolomon commented Mar 6, 2020

I think the focus here should be on generating MLIR / SPIR-V. Tooling and other stuff (like whether / how to use OpenCL types, how to launch and schedule kernels, native GPU types for components, and all that) isn't out of the question, but can hopefully be done as library layers on top of the core functionality. (Counterargument: if the heterogeneous system is all modeled in one IR, LLVM can optimize across the CPU/GPU boundary, as IREE does.)

Otherwise it runs the risk of being too much work to implement or to become useful in a reasonable amount of time.

If adding a SPIR-V backend requires adding too many {GPU,SPIR-V}-specific features to the core language / compiler, then my proposal is to instead add something like a Zig "@dialect" feature that can be imported by the build tooling as a library (implemented as MLIR dialects?).

@andrewrk
Member

How would this work with defining images, buffers, input/output variables and all the binding slots and stuff?

With inline assembly, or with target-specific builtin functions.
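To make "target-specific builtin functions" concrete, here is a purely hypothetical sketch of what such builtins could look like for declaring a binding slot and reading the invocation index. None of these names (`@binding`, `@globalInvocationId`) exist in Zig; they only illustrate the idea.

```zig
// Hypothetical sketch only: @binding and @globalInvocationId are invented
// names illustrating how target-specific builtins might expose descriptor
// bindings and the compute invocation index to shader code.

// Imagine: a storage buffer bound to descriptor set 0, binding 1.
// var buf: []u32 = @binding(0, 1);
var buf: []u32 = undefined;

fn kernel() void {
    // Imagine: a builtin returning gl_GlobalInvocationID.x.
    // const idx = @globalInvocationId(0);
    const idx: u32 = 0;
    buf[idx] = buf[idx] * 2;
}
```

The alternative mentioned above, inline assembly, would instead embed the relevant SPIR-V decorations directly, at the cost of portability.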

@andrewrk andrewrk modified the milestones: 0.7.0, 0.8.0 Oct 9, 2020
@Sobeston
Contributor

https://github.com/EmbarkStudios/rust-gpu does this (its 0.1 release just came out).

@fu5ha
Contributor

fu5ha commented Dec 13, 2020

https://github.com/EmbarkStudios/rust-gpu does this (its 0.1 release just came out).

Actually, rust-gpu does not use LLVM's SPIR-V target at all; it manually generates SPIR-V directly. (The primary motivation for doing it that way is that LLVM's SPIR-V target only supports an OpenCL/CUDA-like execution model, not shader execution models.)

@ProkopRandacek
Contributor

I'm not sure a SPIR-V target is the best option, since you never actually
run the binary. In my experience, the binary is usually embedded into a
program at compile time and sent to the GPU at runtime, or more rarely handed
to the program at runtime and then sent to the GPU.

My workflow with SPIR-V is always something like this:

GLSL -> SPIR-V -> .o -> binary -> SPIR-V sent to GPU at runtime
                        ^
other sources -> .o ----+

which makes some sense in this case, since you need an extra compiler for the
GLSL -> SPIR-V step.

If Zig adds a SPIR-V backend, the workflow would be the same. But I think we
can do better.

I propose that there would be a builtin function that takes a Zig function and
creates a const array of SPIR-V bytecode at compile time.

This would mean that the same functions can be used by CPU and GPU code, in
contrast to the regular approach where the codebase is split into GPU and CPU
side (since they are two different languages and are compiled with two
different compilers...)

It also frees the programmer of the cumbersome task of embedding files into
executables. Although I have to admit that I have not yet explored the Zig
build system and don't know if this problem is already solved.

I imagine it could look something like this:

fn add(a: u32, b: u32) u32 {
    return a + b;
}

fn compute(gl_GlobalInvocationID: [3]u32, buf: []u32) void {
    const idx = gl_GlobalInvocationID[0];
    buf[idx] = add(idx, idx);
}

const spirv = @spirv(compute); // hypothetical builtin yielding a const array of SPIR-V words

pub fn main() void {
    // I can call compute()/add() here on the CPU and at the same time can send
    // them to the GPU as a compute shader
}

In this case the OpEntryPoint is constructed from the compute function's
signature, but I am not sure this is the best way. Further thinking required :D


Alternatively, Zig could just expand the Type.Fn struct to also include the
function's AST (or one of the IRs). This would entirely offload the SPIR-V
compilation to an external library that constructs SPIR-V bytecode from the
Type.Fn struct. The external library could then have a more flexible way of
specifying entry-point interfaces, SPIR-V target version, optimization, ...

It could then look like this:

const spirvc = @import("...");

fn add(a: u32, b: u32) u32 {
    return a + b;
}

fn compute(gl_GlobalInvocationID: [3]u32, buf: []u32) void {
    const idx = gl_GlobalInvocationID[0];
    buf[idx] = add(idx, idx);
}

const spirv = spirvc.compile(@typeInfo(@TypeOf(compute)), ... more arguments specified by the library ...);

pub fn main() void {
    // I can call compute()/add() here on the CPU and at the same time can send
    // them to the GPU as a compute shader
}

Should I create a new issue for this proposal?
I think it's neat :D

@wooster0

wooster0 commented Oct 24, 2022

This would mean that the same functions can be used by CPU and GPU code, in
contrast to the regular approach where the codebase is split into GPU and CPU
side (since they are two different languages and are compiled with two
different compilers...)

Wouldn't this already be possible? In our case the language and the compiler would be the same, but the code would probably live in an external file.

It also frees the programmer of the cumbersome task of embedding files into
executables.

You might be interested in @embedFile.
So when Zig has SPIR-V, I think the status quo would look something like this, if I'm not mistaken (the -target part might be wrong, and maybe it'd be build-exe or build-lib instead of build-obj):

# build our shaders:
# of course this could also just be part of build.zig.
zig build-obj -target spir vert.zig; zig build-obj -target spir frag.zig
# compile and run:
# inside the rest of the codebase we `@embedFile` vert.spv and frag.spv and then send it to the GPU at runtime.
# and in our codebase we can always simply `@import` vert.zig or frag.zig and reuse its code.
zig build run

Looks like a pretty nice workflow to me. One less dependency in terms of compiling the shaders as well.
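The embedding half of that workflow can be sketched with Zig's real `@embedFile` builtin. The file name `vert.spv` is an assumption carried over from the commands above, and the alignment fix is needed because SPIR-V modules are streams of 32-bit words while `@embedFile` yields bytes.

```zig
const std = @import("std");

// Embed the SPIR-V module produced by the separate build step
// (file name "vert.spv" assumed). Copy it into a u32-aligned array
// so it can be viewed as 32-bit SPIR-V words.
const vert_spv align(@alignOf(u32)) = @embedFile("vert.spv").*;

pub fn main() void {
    const words = std.mem.bytesAsSlice(u32, &vert_spv);
    // The first word of a valid SPIR-V module is the magic number
    // 0x07230203; at this point `words` would be handed to the GPU API
    // (e.g. vkCreateShaderModule) at runtime.
    std.debug.print("module: {} words, magic 0x{x}\n", .{ words.len, words[0] });
}
```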

@Snektron
Collaborator

Snektron commented Oct 24, 2022

This would mean that the same functions can be used by CPU and GPU code

This is already possible because of Zig's lazy compilation. Just don't reference any of the host functions on the device and vice versa, and you're good. The only thing that's really required is conditionally exporting entry points, both for the device and host side:

pub fn frag(...) void {}

pub fn vert(...) void {}

pub fn realMain(...) void {}

usingnamespace if (is_shader) struct {} else struct {
    pub const main = realMain;
};

comptime {
    if (is_shader) {
        @export(vert, .{ .name = "vert" });
        @export(frag, .{ .name = "frag" });
    }
}

(instead of conditionally exporting entry points you could also have multiple files that all @import shared code)

@ProkopRandacek
Contributor

Wouldn't this already be possible? In our case the language and the compiler would be the same, but the code would probably be in an external file.

Yeah you are right. Didn't think of that.

It also frees the programmer of the cumbersome task of embedding files into
executables.

You might be interested in @embedFile.

@embedFile looks amazing :D

Looks like a pretty nice workflow to me. One less dependency in terms of compiling the shaders as well.

+1


Just dont reference any of the host functions

What do you mean by host functions? I didn't find anything in the documentation.

@Avokadoen

@ProkopRandacek Host is usually referring to the CPU, while device refers to GPU.

Robin is saying that if you write code that should only run on the CPU, the final SPIR-V output will not contain that code, as long as you don't call it from any of your GPU code.
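One way the `is_shader` flag used in the earlier snippet could be derived is from the compilation target itself, so the same file compiles for both host and device. The exact SPIR-V architecture tags checked here are an assumption about Zig's target enum, not verified against a specific compiler version.

```zig
const builtin = @import("builtin");

// Sketch: decide host vs. device at comptime from the target architecture.
// The .spirv32/.spirv64 tag names are assumptions about std.Target.Cpu.Arch.
const is_shader = switch (builtin.cpu.arch) {
    .spirv32, .spirv64 => true,
    else => false, // host (CPU) build
};
```

With this, lazy compilation ensures host-only functions never reach the SPIR-V output and device-only functions never reach the host binary, as long as neither side references the other.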

@andrewrk
Member

andrewrk commented Apr 9, 2023

This is implemented. Separate issues can be filed for follow-up enhancements.

@andrewrk andrewrk closed this as completed Apr 9, 2023
@andrewrk andrewrk added the arch-spirv Khronos Group SPIR-V label Apr 9, 2023