
(Zluda 3.8.6 for HIP SDK 6.2.4) mini guide (windows) #69

Open
Bercraft opened this issue Jan 26, 2025 · 7 comments

Labels
documentation Improvements or additions to documentation
good first issue Good for newcomers

Comments

@Bercraft

Your question

It works, at least for me; I hope it helps others.

  1. Install the HIP SDK from https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html; in this case choose: Windows 10 & 11 -----> 6.2.4 https://www.amd.com/en/developer/resources/rocm-hub/eula/licenses.html?filename=AMD-Software-PRO-Edition-24.Q4-Win10-Win11-For-HIP.exe.
  2. Check that HIP is in your system PATH variables.
  3. Download ZLUDA from https://github.com/lshqqytiger/ZLUDA/releases (thanks to lshqqytiger); choose 3.8.7 or 3.8.6 (I am using 3.8.6).
  4. Download your GPU files from https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/releases
  5. Then copy comfyui.bat and rename it to comfyui-user.bat, so we do not modify the original.
  6. Add these lines inside comfyui-user.bat if you have errors:

set TORCH_BLAS_PREFER_HIPBLASLT=0
set DISABLE_ADDMM_CUDA_LT=1
set ZLUDA_NVRTC_LIB="G:\ComfyUI-Zluda\zluda\nvrtc.dll"

The first line is for GPUs without hipBLASLt support, like gfx1010.
The second line is for a zluda/cuda/pytorch problem; you can omit it if things work without it.
The third line is from lshqqytiger's zluda 3.8.5 release notes: "New environment variable ZLUDA_NVRTC_LIB: our new ZLUDA runtime compiler depends on the original NVIDIA runtime compiler library, so you should specify the path of it unless it is named nvrtc_cuda.dll."

The line set ZLUDA_NVRTC_LIB="G:\ComfyUI-Zluda\zluda\nvrtc.dll" points to your zluda folder.
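Putting steps 5–6 together, this is a sketch of what the copied comfyui-user.bat can look like (the G:\ComfyUI-Zluda path is the example from above; the echo check and the comments are my additions, and the last line stands in for whatever your original comfyui.bat already contains):

```bat
@echo off
rem --- added troubleshooting lines, see the explanations above ---

rem quick sanity check for step 2: the HIP SDK path should be set
echo HIP_PATH is: %HIP_PATH%

rem only for GPUs without hipBLASLt support (e.g. gfx1010)
set TORCH_BLAS_PREFER_HIPBLASLT=0

rem workaround for a zluda/cuda/pytorch problem; omit if things work without it
set DISABLE_ADDMM_CUDA_LT=1

rem point ZLUDA's runtime compiler at the original NVIDIA nvrtc DLL
set ZLUDA_NVRTC_LIB="G:\ComfyUI-Zluda\zluda\nvrtc.dll"

rem --- the original contents of comfyui.bat (the actual launch) follow here ---
```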



yhhao-YH commented Jan 27, 2025

Thank you for your guide. Before reading it, I had forgotten to set ZLUDA_NVRTC_LIB, which prevented me from using ComfyUI after launching it.
In step 4, I also added rocBLAS 4.2.0 for ROCm 6.2.0 for Windows (from https://github.com/YellowRoseCx/koboldcpp-rocm/releases/tag/deps-v6.2.0) to my ROCm files. It seems to have helped my GPU (gfx1102, another GPU that does not support "rocBLAS-HIP6.2.4").
After doing that, I successfully ran ComfyUI on my PC. Thanks again for your guidance. :)
Because this is a mobile graphics card, performance may be relatively weak.


@patientx patientx added the good first issue Good for newcomers label Jan 31, 2025

chain2k commented Feb 15, 2025

GPU: AMD 890M (gfx1150)
Python-version: 3.11.9
Pytorch -version: 2.6.0
Hip-version: 6.2.4
Zluda-version: 3.8.8 nightly
Error-code: "CUBLAS_STATUS_NOT_SUPPORTED "
Solution: add "set DISABLE_ADDMM_CUDA_LT=1"
My gfx1150 also needs hipBLASLt; you can get the libraries here: https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/releases For my GPU I need hipblaslt-rocmlibs-for-gfx1100-gfx1101-gfx1102-gfx1103-gfx1150-for.hip6.2.7z and rocm.gfx1150.for.hip.skd.6.2.4.7z
In general, thanks for "set DISABLE_ADDMM_CUDA_LT=1"; it helped me run FLUXGYM to train a LoRA.
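For reference, a sketch of checking where those replacement libraries end up. The rocblas\library location is the typical destination described in the likelovewant releases, but the exact path depends on your HIP SDK install, so treat it as an assumption and verify against the instructions shipped with the archive:

```bat
rem HIP_PATH is set by the HIP SDK installer and normally ends with a backslash
echo %HIP_PATH%

rem typical destination for the replacement rocBLAS files
rem (e.g. C:\Program Files\AMD\ROCm\6.2\bin\rocblas\library on a default install)
dir "%HIP_PATH%bin\rocblas\library"

rem gfx1150 also needs the workaround from above before launching
set DISABLE_ADDMM_CUDA_LT=1
```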

@patientx patientx added the documentation Improvements or additions to documentation label Feb 22, 2025
@patientx patientx pinned this issue Feb 22, 2025

patientx commented Feb 22, 2025

It works, at least for me; I hope it helps others.

set DISABLE_ADDMM_CUDA_LT=1

Added DISABLE_ADDMM_CUDA_LT=1 to the zluda hijacker; it is on by default as of the latest update. Updating to the latest version of torch with 5.7.1 (which I have installed at the moment) now works without problems. Should I make the latest torch (2.6.x) and zluda (3.9.0, the normal version for 5.7.1 as of writing) the default? What do you think? And do we need the line disabling hipBLASLt as well, since it is only available for a select group of GPUs and only needed with 6.2 at the moment?


chain2k commented Feb 22, 2025

I tried torch 2.6.x with zluda 3.8.8 and cuDNN, and my rendering speed almost halved. Without cuDNN it is about the same as on 2.3.0. Zluda has been updated; I will try 3.9.0 and report back. As for the hipBLASLt line: I think it should stay disabled by default, but the possibility of enabling it should be exposed explicitly. Maybe offer users a choice during installation?

@Bercraft (Author)

It's better to make an interactive installer with choices during installation.
We should also check which HIP version is installed.
With an interactive installer there will be problems from people not knowing what to do (which gfx code their GPU has, what they have to install).
5.7.1 and PyTorch 2.3.1 gave the best performance in tests.
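A hypothetical sketch of what such an interactive choice could look like in batch. Nothing like this exists in ComfyUI-Zluda yet; both questions and the target file name are assumptions:

```bat
@echo off
rem hypothetical installer fragment (not part of ComfyUI-Zluda)

rem show the installed HIP SDK path so the user can read off the version
echo Detected HIP SDK path: %HIP_PATH%

choice /c YN /m "Does your GPU lack hipBLASLt support (e.g. gfx1010)"
if errorlevel 2 goto skip_hipblaslt
rem redirection comes first so cmd does not treat the trailing 0 as a file handle
>> comfyui-user.bat echo set TORCH_BLAS_PREFER_HIPBLASLT=0
:skip_hipblaslt

choice /c YN /m "Apply the DISABLE_ADDMM_CUDA_LT workaround"
if errorlevel 2 goto done
>> comfyui-user.bat echo set DISABLE_ADDMM_CUDA_LT=1
:done
```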

@patientx (Owner)

Yes, for the time being I am not going to change anything. People like you, who even ask in issues, are rare; some don't even know how to install stuff. For now 5.7.1 and torch are going to stay as they are, but I made the changes so at least people can update zluda or torch if they want to.


chain2k commented Feb 23, 2025

Tested the new ZLUDA with cuDNN: FLUX does not work without FP32, and with it rendering is twice as slow. SDXL does not work. Without cuDNN (and without FP32) everything works and speed is the same as on torch 2.3.x.
