
Can't run this in Google Colab #104

Open
JasonS09 opened this issue Aug 18, 2024 · 0 comments
I get the following error when it reaches the sampling step (I tried to run one of the examples). I installed xFormers, so I don't know what the issue might be. It happens with all three of the examples I tested:

!!! Exception during processing !!! No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(80, 16128, 1, 64) (torch.float16)
     key         : shape=(80, 16128, 1, 64) (torch.float16)
     value       : shape=(80, 16128, 1, 64) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`[email protected]` is not supported because:
    xFormers wasn't build with CUDA support
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 64
Traceback (most recent call last):
  File "/content/drive/MyDrive/ComfyUI/execution.py", line 316, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/content/drive/MyDrive/ComfyUI/execution.py", line 191, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/content/drive/MyDrive/ComfyUI/execution.py", line 168, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/content/drive/MyDrive/ComfyUI/execution.py", line 157, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/nodes.py", line 634, in process
    samples, _ = ddim_sampler.sample(
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/models/samplers/ddim.py", line 119, in sample
    samples, intermediates = self.ddim_sampling(conditioning, size,
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/models/samplers/ddim.py", line 194, in ddim_sampling
    outs = self.p_sample_ddim(img, cond, ts, sigmas, index=index, use_original_steps=ddim_use_original_steps,
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/models/samplers/ddim.py", line 230, in p_sample_ddim
    e_t_cond = self.model.apply_model(x, t, c, **kwargs)
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/models/ddpm3d.py", line 590, in apply_model
    x_recon = self.model(x_noisy, t, c_crossattn=cond["c_crossattn"], c_concat=cond["c_concat"], control=control, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/models/ddpm3d.py", line 753, in forward
    out = self.diffusion_model(xc, t, context=cc, control=control, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/modules/networks/openaimodel3d.py", line 590, in forward
    h = module(h, emb, context=context, batch_size=b, frame_window_size=frame_window_size, frame_window_stride=frame_window_stride)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/modules/networks/openaimodel3d.py", line 48, in forward
    x = layer(x, context)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/modules/attention.py", line 306, in forward
    x = block(x, context=context, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/modules/attention.py", line 241, in forward
    return checkpoint(self._forward, input_tuple, self.parameters(), self.checkpoint)
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/common.py", line 94, in checkpoint
    return func(*inputs)
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/modules/attention.py", line 245, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None, mask=mask) + x
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-DynamiCrafterWrapper/lvdm/modules/attention.py", line 177, in efficient_forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=None)
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 276, in memory_efficient_attention
    return _memory_efficient_attention(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 395, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 414, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 119, in _dispatch_fw
    return _run_priority_list(
  File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 55, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(80, 16128, 1, 64) (torch.float16)
     key         : shape=(80, 16128, 1, 64) (torch.float16)
     value       : shape=(80, 16128, 1, 64) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`[email protected]` is not supported because:
    xFormers wasn't build with CUDA support
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 64
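
For reference, here is a minimal diagnostic sketch (my own, not from this repo) that can be run in a Colab cell to check whether the installed xFormers wheel actually ships its CUDA kernels, which is what every branch of the dispatch message above is complaining about. Running `python -m xformers.info` from the shell, as the error suggests, prints the same information.

```python
# Diagnostic sketch: check the PyTorch/CUDA build and whether xFormers'
# memory-efficient attention works on GPU tensors at all.
import torch
import xformers
import xformers.ops

print("torch:", torch.__version__, "cuda:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())  # False would explain the missing CUDA ops
print("xformers:", xformers.__version__)

if torch.cuda.is_available():
    # Small fp16 smoke test with the same [batch, seq, heads, dim] layout as the error.
    q = torch.randn(1, 16, 1, 64, dtype=torch.float16, device="cuda")
    out = xformers.ops.memory_efficient_attention(q, q, q)
    print("memory_efficient_attention OK:", out.shape)
```

If `torch.cuda.is_available()` is True but the smoke test still raises the same NotImplementedError, my guess (not confirmed here) is that the xFormers wheel was built against a different PyTorch/CUDA version than the one Colab currently ships, so reinstalling an xFormers build matching the environment's torch version would be worth trying.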