
RTX 4060 8 GB runs out of VRAM — even with ComfyUI's optimizations, can this Hunyuan large model really not run in 8 GB? #3

Open
lixida123 opened this issue Jul 25, 2024 · 1 comment

Comments

@lixida123

My RTX 4060 with 8 GB runs out of VRAM. Even after ComfyUI's optimizations, can this Hunyuan large model really not run in 8 GB?
Error occurred when executing DiffusersCLIPLoader:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 7.22 GiB
Requested : 183.67 MiB
Device limit : 8.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB

File "D:\comfyui\ComfyUI-aki-v1.3\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\comfyui\ComfyUI-aki-v1.3\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\comfyui\ComfyUI-aki-v1.3\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\comfyui\ComfyUI-aki-v1.3\custom_nodes\Comfyui-HunyuanDiT\nodes.py", line 163, in load_clip
out = CLIP(False, root, CLIP_PATH, t5_file)
File "D:\comfyui\ComfyUI-aki-v1.3\custom_nodes\Comfyui-HunyuanDiT\clip.py", line 41, in __init__
clip_text_encoder.eval().to(self.device)
File "D:\comfyui\ComfyUI-aki-v1.3\python\lib\site-packages\transformers\modeling_utils.py", line 2556, in to
return super().to(*args, **kwargs)
File "D:\comfyui\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "D:\comfyui\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\comfyui\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\comfyui\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "D:\comfyui\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)

@pzc163
Owner

pzc163 commented Jul 28, 2024

With 8 GB of VRAM you can only use the quantized Hunyuan model and the quantized mT5 model.
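A rough back-of-the-envelope sketch of why quantization is the fix here: the weight footprint of a model scales linearly with bytes per parameter, so halving the precision halves the VRAM needed before activations are even counted. The 1.6 B parameter count below is purely illustrative (not the actual size of the Hunyuan CLIP or mT5 encoders), and `weight_vram_gib` is a hypothetical helper, not part of ComfyUI or this repo:

```python
# Rough VRAM estimate for a model's weights alone, ignoring activations,
# optimizer state, and CUDA allocator overhead.
def weight_vram_gib(n_params: float, bytes_per_param: float) -> float:
    """Return the weight memory footprint in GiB."""
    return n_params * bytes_per_param / 2**30

N = 1.6e9  # assumed parameter count, for illustration only

print(f"fp32: {weight_vram_gib(N, 4):.2f} GiB")  # → fp32: 5.96 GiB
print(f"fp16: {weight_vram_gib(N, 2):.2f} GiB")  # → fp16: 2.98 GiB
print(f"int8: {weight_vram_gib(N, 1):.2f} GiB")  # → int8: 1.49 GiB
```

On an 8 GB card already holding 7.22 GiB (as the traceback shows), even a 183.67 MiB request fails, so dropping the text encoders to a quantized format frees the headroom the DiT model needs.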
