Basic extension catalog for kernels #455
Comments
pdamme added a commit that referenced this issue on Sep 7, 2023
- The DAPHNE compiler usually lowers most domain-specific operations to calls to pre-compiled kernels.
  - So far, the DAPHNE compiler did not know which kernel instantiations are available in pre-compiled form.
  - Instead, it generated the expected function name of a kernel based on the DaphneIR operation's mnemonic, its result/argument types, and the processing backend (e.g., CPP or CUDA).
  - If the expected kernel was not available, an error of the form "JIT session error: Symbols not found: ..." occurred during LLVM JIT compilation.
- This commit introduces a kernel catalog that informs the DAPHNE compiler about the available pre-compiled kernels.
  - The kernel catalog stores a mapping from DaphneIR ops (represented by their mnemonic) to information on the kernels registered for the op.
  - The information stored for each kernel comprises: the name of the pre-compiled C/C++ function, the result/argument types, and the processing backend (e.g., CPP or CUDA).
  - The kernel catalog provides methods for registering a kernel, retrieving the registered kernels for a specific op, and dumping the catalog.
- The kernel catalog is stored inside the DaphneUserConfig.
  - This makes sense since users will be able to configure the available kernels in the future.
  - That way, the kernel catalog is accessible in all parts of the DAPHNE compiler and runtime.
- The information on the available kernels is stored in a JSON file named catalog.json (or CUDAcatalog.json).
  - Currently, catalog.json is generated by genKernelInst.py; thus, the system has access to the same kernel specializations as before.
  - catalog.json is read at DAPHNE system start-up in the coordinator and the distributed workers.
  - Added a parser for the kernel catalog JSON file.
- RewriteToCallKernelOpPass uses the kernel catalog to obtain the kernel function name for an operation, instead of relying on a naming convention.
  - However, there are still a few points where kernel function names are built by convention (to be addressed later):
    - lowering of DistributedPipelineOp in RewriteToCallKernelOpPass
    - lowering of MapOp in LowerToLLVMPass
    - lowering of VectorizedPipelineOp in LowerToLLVMPass
- Directly related misc changes:
  - DaphneIrExecutor has getters for its DaphneUserConfig.
  - CompilerUtils::mlirTypeToCppTypeName() allows generating either underscores (as before) or angle brackets (new) for template parameters.
- This is a first step towards extensibility w.r.t. the kernels; for now, the main contribution is the representation of the available kernels in a data structure (the kernel catalog).
- Contributes to #455, but doesn't close it yet.
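To make the interface described in this commit message more concrete, here is a minimal C++ sketch of such a kernel catalog; the type and member names (KernelInfo, KernelCatalog, registerKernel, getKernelsForOp, dump) are illustrative assumptions and do not reproduce DAPHNE's actual declarations.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Hypothetical sketch: the information recorded per pre-compiled kernel.
struct KernelInfo {
    std::string funcName;                 // name of the pre-compiled C/C++ function
    std::vector<std::string> resArgTypes; // result/argument type names
    std::string backend;                  // processing backend, e.g., "CPP" or "CUDA"
};

// Hypothetical sketch: mapping from op mnemonic to the kernels registered for it.
class KernelCatalog {
    std::unordered_map<std::string, std::vector<KernelInfo>> kernelsByOp;

public:
    void registerKernel(const std::string &opMnemonic, KernelInfo info) {
        kernelsByOp[opMnemonic].push_back(std::move(info));
    }

    // Returns all kernels registered for the given op (empty if none).
    const std::vector<KernelInfo> &getKernelsForOp(const std::string &opMnemonic) const {
        static const std::vector<KernelInfo> empty;
        auto it = kernelsByOp.find(opMnemonic);
        return it != kernelsByOp.end() ? it->second : empty;
    }

    // Dumps the catalog contents, e.g., for debugging.
    void dump(std::ostream &os = std::cerr) const {
        for (const auto &[op, kernels] : kernelsByOp)
            for (const auto &k : kernels)
                os << op << " -> " << k.funcName << " (" << k.backend << ")\n";
    }
};
```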
pdamme added a commit that referenced this issue on Apr 4, 2024
pdamme added a commit that referenced this issue on Apr 10, 2024
pdamme added a commit that referenced this issue on Apr 19, 2024
- The DAPHNE compiler usually lowers most domain-specific operations to calls to pre-compiled kernels.
  - So far, the DAPHNE compiler did not know which kernel instantiations are available in pre-compiled form.
  - Instead, it generated the expected function name of a kernel based on the DaphneIR operation's mnemonic, its result/argument types, and the processing backend (e.g., CPP or CUDA).
  - If the expected kernel was not available, an error of the form "JIT session error: Symbols not found: ..." occurred during LLVM JIT compilation.
- This commit introduces an initial version of a kernel catalog that informs the DAPHNE compiler about the available pre-compiled kernels.
  - The kernel catalog stores a mapping from DaphneIR ops (represented by their mnemonic) to information on the kernels registered for the op.
  - The information stored for each kernel currently comprises: the name of the pre-compiled C/C++ function, the result/argument types, and the processing backend (e.g., CPP or CUDA). The set of information will be extended in the future.
  - The kernel catalog provides methods for registering a kernel, retrieving the registered kernels for a specific op, and dumping the catalog.
- The kernel catalog is stored inside the DaphneUserConfig.
  - This makes sense since users will be able to configure the available kernels in the future.
  - That way, the kernel catalog is accessible in all parts of the DAPHNE compiler and runtime.
- The information on the available kernels is currently stored in a JSON file named catalog.json (or CUDAcatalog.json).
  - Currently, catalog.json is generated by genKernelInst.py; thus, the system has access to the same kernel specializations as before.
  - catalog.json is read at DAPHNE system start-up in the coordinator and the distributed workers.
  - Added a parser for the kernel catalog JSON file.
  - The concrete format of the catalog files may be changed in the future (e.g., to make it more efficient or intuitive).
- RewriteToCallKernelOpPass uses the kernel catalog to obtain the kernel function name for an operation, instead of relying on a naming convention.
  - However, there are still a few points where kernel function names are built by convention (to be addressed later):
    - lowering of DistributedPipelineOp in RewriteToCallKernelOpPass
    - lowering of MapOp in LowerToLLVMPass
    - lowering of VectorizedPipelineOp in LowerToLLVMPass
- Directly related misc changes:
  - DaphneIrExecutor has getters for its DaphneUserConfig.
  - CompilerUtils::mlirTypeToCppTypeName() allows generating either underscores (as before) or angle brackets (new) for template parameters.
- This is a first step towards extensibility w.r.t. kernels; for now, the main contribution is the representation of the available kernels in a data structure (the kernel catalog).
- Closes #455, with an initial solution we can build upon in the future.
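For illustration only, a single entry in catalog.json might look roughly like the following; the issue does not show the actual schema, so the field names and the example op, types, and function name are assumptions, not the real file format.

```json
{
  "opMnemonic": "ewAdd",
  "kernelFuncName": "_ewAdd__DenseMatrix_double__DenseMatrix_double__DenseMatrix_double",
  "resTypes": ["DenseMatrix<double>"],
  "argTypes": ["DenseMatrix<double>", "DenseMatrix<double>"],
  "backend": "CPP"
}
```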
pdamme added a commit that referenced this issue on Apr 22, 2024
pdamme added a commit that referenced this issue on May 3, 2024
pdamme added a commit that referenced this issue on May 3, 2024
So far, DAPHNE uses a fixed set of pre-compiled kernels (as defined in src/runtime/local/kernels/kernels.json). Each kernel is specialized for a certain combination of input/output data/value types and processing backend (also called "API" in kernels.json), such as CPP, CUDA, or FPGAOPENCL. Furthermore, the DAPHNE compiler does not know which kernels are available in pre-compiled form.

For the lowering from DaphneIR operations to kernels, we rely on naming conventions and generate the expected kernel function name based on the operation name, the input/output types, and the processing backend. If a kernel with that name does not exist in pre-compiled form, a typical error is raised (JIT session error: Symbols not found: ...).

This approach is problematic for several reasons.
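To illustrate the convention-based lowering described above, the following is a minimal C++ sketch that derives an expected kernel function name from the op name, the input/output type names, and the processing backend; the helper name, separators, and ordering are assumptions and do not reproduce DAPHNE's actual convention.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch: build the expected kernel function name purely by
// convention from the op mnemonic, the result/argument type names, and the
// processing backend. If no pre-compiled kernel exports a symbol with this
// name, JIT linking later fails with "JIT session error: Symbols not found: ...".
std::string expectedKernelFuncName(const std::string &opMnemonic,
                                   const std::vector<std::string> &resArgTypes,
                                   const std::string &backend) {
    std::string name = "_" + opMnemonic;
    for (const auto &t : resArgTypes)
        name += "__" + t;  // separator and ordering are assumptions
    name += "_" + backend; // e.g., "CPP" or "CUDA"
    return name;
}
```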
To solve these problems, and to take a first step towards extensibility, we need an initial version of an extension catalog of available kernels. This catalog should be populated at system start-up or during DaphneDSL compile-time, and it should be used by the DAPHNE compiler to select kernels (based on types and processing backend).
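Below is a minimal sketch of the selection step such a catalog would enable, assuming per-kernel entries like those sketched further above; the matching policy (exact type-name equality, first match wins) is an assumption for illustration, not a specification of DAPHNE's behavior.

```cpp
#include <string>
#include <vector>

// Hypothetical per-kernel entry, as sketched earlier.
struct KernelInfo {
    std::string funcName;
    std::vector<std::string> resArgTypes;
    std::string backend;
};

// Hypothetical selection: among the kernels registered for an op, pick the
// first one whose result/argument types and backend match the request.
// Returns an empty string if no registered kernel fits, allowing the compiler
// to report a compile-time error instead of a late JIT linking failure.
std::string selectKernel(const std::vector<KernelInfo> &candidates,
                         const std::vector<std::string> &wantedTypes,
                         const std::string &wantedBackend) {
    for (const auto &k : candidates)
        if (k.resArgTypes == wantedTypes && k.backend == wantedBackend)
            return k.funcName;
    return "";
}
```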
This issue is about a first step in this direction, mainly refactoring the lowering from DaphneIR ops to kernel calls to make it more amenable to extensibility afterwards. Once we have that, we can continue in multiple directions.