- Metalhead.jl updates
  - PR now ready for review
  - Needs some small clean-up and tests
- FastAI.jl updates
This was an informal, off-agenda meeting due to lower attendance and minimal updates.
We discussed some priorities for future work:
- GNNs
- Distributed training
We discussed how to integrate Dagger.jl for distributed computing. This would probably live in an external package (e.g. a FluxDagger.jl) that translates Flux models into DAGs for execution.
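As a rough sketch of the idea (no such package exists yet, so the layer-per-task mapping below is purely an assumption), a Flux `Chain` could be unrolled into Dagger tasks like this:

```julia
# Hypothetical sketch only: map each layer of a Chain to one Dagger task,
# so the forward pass becomes a DAG that Dagger can schedule across workers.
using Dagger, Flux

model = Chain(Dense(10 => 32, relu), Dense(32 => 2))
x = rand(Float32, 10, 16)

# Dagger unwraps task results passed as arguments, so each spawned
# layer call depends on the output of the previous one.
t = reduce((acc, layer) -> Dagger.@spawn(layer(acc)), model.layers; init = x)
y = fetch(t)  # materialize the final output
```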
We discussed splitting the CUDA.jl-related code out of NNlib.jl into a sub-package. This will make it easier to support multiple backends, such as AMDGPU.jl.
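Mechanically this is just multiple dispatch: NNlib keeps generic `AbstractArray` fallbacks with no GPU dependency, and the sub-package loads CUDA.jl and adds `CuArray` methods. A minimal sketch, with a made-up function name:

```julia
# Hedged sketch of the split; `myrelu` is a placeholder, not a real
# NNlib function or signature.

# In NNlib: a generic fallback that works for any AbstractArray.
myrelu(x::AbstractArray) = max.(x, 0)

# In the CUDA sub-package: `using CUDA` lives here, so NNlib itself stays
# GPU-free. The body would dispatch to CUDNN or a custom kernel; it is
# commented out so the sketch runs without a GPU.
# using CUDA
# myrelu(x::CuArray) = ...  # CUDNN or hand-written CUDA kernel
```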
We also discussed how, with KernelAbstractions.jl, some NNlib kernels could eventually be unified into single implementations shared across backends.
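For example (a minimal sketch, not an actual NNlib kernel), a fused bias-plus-ReLU can be written once and launched on whichever backend owns the arrays:

```julia
# One kernel definition that runs on CPU or GPU depending on the arrays'
# backend. Illustrative only; not an NNlib kernel.
using KernelAbstractions

@kernel function bias_relu!(y, x, b)
    i, j = @index(Global, NTuple)
    @inbounds y[i, j] = max(x[i, j] + b[i], zero(eltype(y)))
end

x = rand(Float32, 32, 8); b = rand(Float32, 32); y = similar(x)
backend = get_backend(x)  # CPU() here; CUDABackend() for CuArrays, etc.
bias_relu!(backend)(y, x, b; ndrange = size(y))
KernelAbstractions.synchronize(backend)
```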