
[Issue]: Compatibility with Apple M1 #347

Closed
ballenvironment opened this issue Apr 21, 2023 · 33 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@ballenvironment

Issue Description

Doesn't work at all on my M1 MacBook; many models crash the entire process, and some start "generating" but fail at the end with the following error:

RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::Half instead.

I tried changing some settings, like --no-half and "Upcast cross attention layer to float32", but that didn't work. A1111 works on my M1 Mac, so I'm not sure what you changed here.

Version Platform Description

Commit Version: 57204b3

@vladmandic
Owner

yes, I'm aware. the new torch is very different and i have an open ask for the M1 community to provide proper package instructions - what exactly should be installed automatically. i don't have an M1 system available, so this is up to the community to provide me some guidance, but so far i have not received any input.

it was the same for AMD and that community is very active so there has been a lot of progress.

@vladmandic added the enhancement and help wanted labels on Apr 21, 2023
@vladmandic changed the title from "[Issue]: Doesn't work Mac M1" to "[Issue]: Compatibility with Apple M1" on Apr 21, 2023
@jasoncwade

jasoncwade commented Apr 21, 2023

I got the same error on my MBA M2 and fixed it, but not completely sure how. I think it had something to do with the steps below.

I copied the "webui-macos-env.sh" file I had in my Automatic1111 install over. I commented out every line in it except two:
export TORCH_COMMAND="pip install torch==1.12.1 torchvision==0.13.1"
and
export PYTORCH_ENABLE_MPS_FALLBACK=1

I deleted my venv and then ran webui.sh. It installed those two versions of torch and torchvision. It then threw an error and crashed so I commented out that torch command.

I did not delete the 1.12.1 install and then re-ran web.sh. It installed torch 2.0 and since then has been working.

I left the export PYTORCH_ENABLE_MPS_FALLBACK=1 line uncommented as it solved a different error.

Edit: Forgot to mention I'm running build 7c684a8.
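For reference, the pared-down webui-macos-env.sh described in the steps above would look roughly like this (a sketch reconstructed from the comment, not an official file; the torch pin is shown commented out, as it was only needed for the first run):

```shell
# webui-macos-env.sh -- trimmed to the two lines kept in the steps above.
# The 1.12.1 pin was used once to bootstrap the venv, then commented out
# so the launcher could install torch 2.0 on the next run.
#export TORCH_COMMAND="pip install torch==1.12.1 torchvision==0.13.1"

# Let unsupported MPS ops fall back to the CPU instead of crashing.
export PYTORCH_ENABLE_MPS_FALLBACK=1
```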

@naijim

naijim commented Apr 22, 2023

Maybe this discussion is helpful:

AUTOMATIC1111/stable-diffusion-webui#7453

@vladmandic
Owner

vladmandic commented Apr 23, 2023

based on community feedback so far, i've made two changes:

  • enable PYTORCH_ENABLE_MPS_FALLBACK=1
  • make the tensorflow package optional, since mac installs frequently complain about it

additionally, apple users should set the following options in UI -> Settings -> CUDA params: --no-half and --no-half-vae

let me know what else you think should be done to support the apple platform.

@WojtekKowaluk

Instead of --no-half I recommend "Enable upcast sampling" on the Stable Diffusion tab.

@nybe

nybe commented Apr 23, 2023

ok... tried some of the above stuff and now I'm getting:

ModuleNotFoundError: No module named 'clip'

@lauretta91

I get the same - illegal hardware instruction when running ./webui.sh.

Just to confirm with other users, what file are you running?

@krummrey

krummrey commented Apr 24, 2023

I tried installing it just now (Apr 24th, 10 AM GMT) from a fresh git clone.
Apple M1 Pro / 32 GB
macOS 13.2.1 (22D68)

Various errors during install, mostly from not having Xcode installed. Full log here on Pastebin.
The automatic download of the default model didn't work, so I installed a model manually.

After installing Xcode, all the failed pip installs worked:

source venv/bin/activate
xcode-select --install
pip install --upgrade pip
pip install --upgrade basicsr
pip install --upgrade gfpgan
pip install --upgrade lmdb
pip install --upgrade realesrgan

Launched with ./webui.sh. It starts but fails to generate an image.

0:00:00loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/9e200cfa-7d96-11ed-886f-a23c4f261b56/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): 
error: input types 'tensor<2x1280xf32>' and 'tensor<1280xf16>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
zsh: abort      ./webui.sh

@jasoncwade

Following up on my earlier post: after the changes vladmandic made, I removed the webui-macos-env.sh file (since what I was using it for is now built in), and everything has been working for me.

I have "Use full precision for VAE" checked under CUDA Settings (but not "Use full precision for model") and "Enable upcast sampling" under Stable Diffusion settings. I think those are the only settings I've changed from the default that would affect image generation.

I'm currently running build 98adfb3.

@nniikkllaass

In my case the installation works fine on the M1. It does not download the default SD v1.5 model, but I imported it into the SD folder, so that part works. As soon as I press the generate button, I get an error message saying that Python crashed... I tried different models, but nothing helped.

@nybe

nybe commented Apr 24, 2023

Finally got it installed and running but cannot create images.

errors:

Initializing ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0% -:--:-- 0:00:00loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/97f6331a-ba75-11ed-a4bc-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): error: input types 'tensor<2x1280xf32>' and 'tensor<1280xf16>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
zsh: abort ./webui.sh

@hs1n

hs1n commented Apr 25, 2023

Continuing from #337:

I'm able to generate images with the DPM++ 2M Karras sampler after checking, under UI > Settings > CUDA Settings:

  • Enable upcast sampling. Usually produces similar results to --no-half with better performance while using less memory

and then clicking Apply settings.

@lauretta91

Finally got it installed and running but cannot create images.

errors:

Initializing ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0% -:--:-- 0:00:00loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/97f6331a-ba75-11ed-a4bc-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): error: input types 'tensor<2x1280xf32>' and 'tensor<1280xf16>' are not broadcast compatible LLVM ERROR: Failed to infer result type(s). zsh: abort ./webui.sh

I finally got it working. I had to swap the venv folder with the one from Automatic1111; it seems the previous one was downgrading Python to 3.9.7 (from Python 3.10), so it wasn't running.

Once initialized, I changed the sampler to Euler a. The default is UniPC, which doesn't work and gives me the error below:
error: input types 'tensor<2x1280xf32>' and 'tensor<1280xf16>' are not broadcast compatible

@hs1n

hs1n commented Apr 25, 2023

512x512 image generation speed on MBP M1 16G:
Sampler: DPM++ 2M Karras, steps: 20

this project(--medvram AND upcast-sampling checked):
00:32<00:00, 1.62s/it
webui(--skip-torch-cuda-test --upcast-sampling --disable-nan-check --medvram --opt-sub-quad-attention --use-cpu interrogate):
00:32<00:00, 1.61s/it

Almost the same.

@lauretta91

if I change the venv folder with the a1111 one I get this: "No module 'xformers'. Proceeding without it." which results in this error: "ModuleNotFoundError: No module named 'clip'"

It's normal that you get no xformers module; xformers is only available for NVIDIA GPUs, and macOS doesn't support NVIDIA GPUs.

For CLIP, you install it manually from openai/CLIP (https://github.com/openai/CLIP). You may be asked to install other requirements after CLIP; if so, you can install them from requirements.txt with the command:
pip install -r requirements.txt

Note that I already had this installed from Automatic1111, so you might want to use the same.

@nniikkllaass

if I change the venv folder with the a1111 one I get this: "No module 'xformers'. Proceeding without it." which results in this error: "ModuleNotFoundError: No module named 'clip'"

It's normal that you get no module xformers as this is available only for NVIDIA GPUs. And we are using a macOS, which doesn't support an NVIDIA GPU.

For CLIP, you install it manually from openai/CLIP (https://github.com/openai/CLIP). You may be asked to install other requirements after CLIP; if so, you can install them from requirements.txt with the command: pip install -r requirements.txt

Note that I had this already installed from the automatic1111, so you might want to use the same.

Does it work for you to generate images, and if yes, what did you change to make it work? In my case it looks good while running the setup, but as soon as I want to generate, Python quits...

@lauretta91

if I change the venv folder with the a1111 one I get this: "No module 'xformers'. Proceeding without it." which results in this error: "ModuleNotFoundError: No module named 'clip'"

It's normal that you get no module xformers as this is available only for NVIDIA GPUs. And we are using a macOS, which doesn't support an NVIDIA GPU.
For CLIP, you install it manually from openai/CLIP (https://github.com/openai/CLIP). You may be asked to install other requirements after CLIP; if so, you can install them from requirements.txt with the command: pip install -r requirements.txt
Note that I had this already installed from the automatic1111, so you might want to use the same.

Does it work for you to generate images, and if yes, what did you change to make it work? In my case it looks good while running the setup, but as soon as I want to generate, Python quits...

Yes, it works. I followed what @hs1n suggested: changed the sampling method to something other than the default (UniPC), and enabled upcast sampling under Settings > CUDA Settings.

@nniikkllaass

Python does not crash anymore but I get this error: NotImplementedError: The operator 'aten::_linalg_solve_ex.result' is not
currently implemented for the MPS device. If you want this op to be added in
priority during the prototype phase of this feature, please comment on
pytorch/pytorch#77764. As a temporary fix, you can set
the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a
fallback for this op. WARNING: this will be slower than running natively on MPS.

if I change the venv folder with the a1111 one I get this: "No module 'xformers'. Proceeding without it." which results in this error: "ModuleNotFoundError: No module named 'clip'"

It's normal that you get no module xformers as this is available only for NVIDIA GPUs. And we are using a macOS, which doesn't support an NVIDIA GPU.
For CLIP, you install it manually from openai/CLIP (https://github.com/openai/CLIP). You may be asked to install other requirements after CLIP; if so, you can install them from requirements.txt with the command: pip install -r requirements.txt
Note that I had this already installed from the automatic1111, so you might want to use the same.

Does it work for you to generate images, and if yes, what did you change to make it work? In my case it looks good while running the setup, but as soon as I want to generate, Python quits...

Yes, it works. I followed what @hs1n suggested: changed the sampling method to something other than the default (UniPC), and enabled upcast sampling under Settings > CUDA Settings.

@lauretta91

lauretta91 commented Apr 25, 2023

error: NotImplementedError: The operator 'aten::_linalg_solve_ex.result' is not
currently implemented for the MPS device. If you want this op to be added in
priority during the prototype phase of this feature, please comment on
pytorch/pytorch#77764. As a temporary fix, you can set
the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a
fallback for this op. WARNING: this will be slower than running natively on MPS.

Yep. It seems you need to run this so unsupported ops fall back to the CPU:

export PYTORCH_ENABLE_MPS_FALLBACK=1

before running ./webui.sh.

The error means you are calling a PyTorch operator that is not yet implemented for the MPS device. MPS (Metal Performance Shaders) is Apple's GPU acceleration backend for PyTorch on Apple silicon, and not all PyTorch operators are supported on it yet.

Otherwise, you can try to downgrade PyTorch and rerun the script.

I would also add the error to the linked PyTorch issue.
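If you'd rather not export the variable for the whole shell session, it can be scoped to a single launch instead (a generic shell pattern, assuming the stock ./webui.sh entry point):

```shell
# Enable the CPU fallback only for this one invocation of the launcher;
# the variable is not left set in the surrounding shell.
PYTORCH_ENABLE_MPS_FALLBACK=1 ./webui.sh
```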

@Scholar01
Contributor

On my Mac M1 with 32GB, I was able to run it smoothly by simply enabling PYTORCH_ENABLE_MPS_FALLBACK=1, and it seems that there is no need to set (--no-half), (--no-half-vae), or (upcast sampling). However, I am not sure how it will perform on a device with 16GB. I have submitted the PR and will try it on a 16GB device later.

@nybe

nybe commented Apr 25, 2023

Finally got it installed and running but cannot create images.

errors:

Initializing ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0% -:--:-- 0:00:00loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/97f6331a-ba75-11ed-a4bc-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): error: input types 'tensor<2x1280xf32>' and 'tensor<1280xf16>' are not broadcast compatible LLVM ERROR: Failed to infer result type(s). zsh: abort ./webui.sh

FYI: I seem to have fixed my install problems by installing tensorflow-metal for Apple silicon...
https://developer.apple.com/metal/tensorflow-plugin/

@nybe

nybe commented Apr 25, 2023

Had more issues this morning, installed PyTorch and it seems to have fixed the issues:

https://pytorch.org/get-started/locally/#macos-version

@hs1n

hs1n commented Apr 26, 2023

if I change the venv folder with the a1111 one I get this: "No module 'xformers'. Proceeding without it." which results in this error: "ModuleNotFoundError: No module named 'clip'"

It's normal that you get no module xformers as this is available only for NVIDIA GPUs. And we are using a macOS, which doesn't support an NVIDIA GPU.

For CLIP, you install it manually from openai/CLIP (https://github.com/openai/CLIP). You may be asked to install other requirements after CLIP; if so, you can install them from requirements.txt with the command: pip install -r requirements.txt

Note that I had this already installed from the automatic1111, so you might want to use the same.

A possible fix for "No module named 'clip'" is to launch with ./webui.sh --upgrade; you should then see:

10:31:14-498482 INFO     Installing packages
10:31:14-498880 INFO     Installing package: git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1

For ./webui.sh users, launching inside the venv is the default. If you think your environment is corrupted, I suggest you remove the venv directory and start over.

If you want to check/install the requirements manually and you use webui.sh to launch, make sure you have activated the venv:

source venv/bin/activate
pip install -r requirements.txt
deactivate

You can also use which pip to check which pip you are using, e.g.:

$ which pip
/Users/username/Projects/automatic/venv/bin/pip

@moebis

moebis commented Apr 26, 2023

FYI: I seem to have fixed my install problems by installing tensorflow-metal for Apple silicon... https://developer.apple.com/metal/tensorflow-plugin/

@nybe Are you able to run Automatic1111 or Vlad Diffusion inside of miniconda? Those instructions from Apple look promising. One issue I have is that pip3 is installing modules outside the normal paths. I also think the Apple build of TensorFlow with Metal support might let us use our GPUs instead of just the CPUs (maybe even the NPUs too?)

@Turingtestmictest

Hi, I'm new to installing from GitHub. I'm on a Mac Mini M1, 16GB RAM.
I tried the vladmandic install today and got these errors:
ValueError('Unrecognised argument(s): %s' % keys)
ValueError: Unrecognised argument(s): encoding

I have no idea how to solve this, as I don't have a background in coding or GitHub in general.

Then I tried installing Automatic1111 and got this error:
error: unrecognized arguments: --skip-torch-cuda-test --upcast-sampling --no-half-vae

Any suggestions welcome!
Thanks in Advance

@vladmandic
Owner

nothing to do with M1 - those command line flags were removed and moved to UI settings.

@nybe

nybe commented Apr 26, 2023

FYI: I seem to have fixed my install problems by installing tensorflow-metal for Apple silicon... https://developer.apple.com/metal/tensorflow-plugin/

@nybe Are you able to run automatic1111 or Vlad Diffusion inside of miniconda? Because those instructions from Apple look promising. One issue I have is that pip3 is installing modules outside of the normal paths. I also think the Apple version of tensor flow with metal support might allow us to utilize our GPU's instead of just the CPUs (maybe even the NPUs too?)

@moebis Been running A1111 right out of the box, but I did install miniconda and it seems to have fixed a lot of the initial issues I was having running Vlad... I'm sure I've made a mess of my Mac with all the stuff I've been trying, but it's all smooth now (knock wood)

@Turingtestmictest

nothing to do with M1, those command line flags are removed and moved to UI settings.

Sorry, I made the mistake of not installing Python and Git. I installed those and got a lot of errors; here is the whole install process:

18:15:25-561048 INFO Python 3.10.11 on Darwin
18:15:25-615383 INFO Version: 93b0de7 Wed Apr 26 09:02:32 2023 -0400
18:15:25-625822 INFO Setting environment tuning
18:15:25-626768 INFO Using CPU-only Torch
18:15:25-627221 INFO Installing package: torch torchaudio torchvision
18:15:43-098967 INFO Torch 2.0.0
18:15:43-099672 WARNING Torch repoorts CUDA not available
18:15:43-100126 INFO Installing package: tensorflow==2.12.0
18:15:43-481609 INFO Verifying requirements
18:15:43-482945 INFO Installing package: addict
18:15:43-862758 INFO Installing package: aenum
18:15:44-345773 INFO Installing package: aiohttp
18:15:45-601297 INFO Installing package: anyio
18:15:46-128412 INFO Installing package: appdirs
18:15:46-567267 INFO Installing package: astunparse
18:15:47-138983 INFO Installing package: basicsr
18:16:04-278508 ERROR Error running pip: install --upgrade basicsr
18:16:04-279861 INFO Installing package: bitsandbytes
18:16:10-110717 INFO Installing package: blendmodes
18:16:10-629552 INFO Installing package: clean-fid
18:16:25-898175 ERROR Error running pip: install --upgrade clean-fid
18:16:25-900162 INFO Installing package: easydev
18:16:26-991828 INFO Installing package: extcolors
18:16:27-711164 INFO Installing package: facexlib
18:16:39-267422 ERROR Error running pip: install --upgrade facexlib
18:16:39-269434 INFO Installing package: filetype
18:16:39-712029 INFO Installing package: font-roboto
18:16:40-816434 INFO Installing package: fonts
18:16:41-236054 INFO Installing package: future
18:16:42-277069 INFO Installing package: gdown
18:16:43-034890 INFO Installing package: gfpgan
18:16:54-793158 ERROR Error running pip: install --upgrade gfpgan
18:16:54-795055 INFO Installing package: GitPython
18:16:55-509967 INFO Installing package: httpcore
18:16:56-067820 INFO Installing package: inflection
18:16:56-500965 INFO Installing package: jsonmerge
18:16:57-475407 INFO Installing package: kornia
18:16:58-310407 INFO Installing package: lark
18:16:58-819990 INFO Installing package: lmdb
18:17:02-080513 INFO Installing package: lpips
18:17:12-950066 ERROR Error running pip: install --upgrade lpips
18:17:12-951970 INFO Installing package: numpy
18:17:13-430591 INFO Installing package: omegaconf
18:17:14-523699 INFO Installing package: open-clip-torch
18:17:16-672044 INFO Installing package: opencv-contrib-python
18:17:19-550930 INFO Installing package: piexif
18:17:20-082210 INFO Installing package: Pillow
18:17:20-620975 INFO Installing package: psutil
18:17:21-336048 INFO Installing package: pyyaml
18:17:21-780685 INFO Installing package: realesrgan
18:17:31-208779 ERROR Error running pip: install --upgrade realesrgan
18:17:31-209904 INFO Installing package: requests
18:17:31-657894 INFO Installing package: resize-right
18:17:32-183773 INFO Installing package: rich
18:17:32-642262 INFO Installing package: safetensors
18:17:34-486399 ERROR Error running pip: install --upgrade safetensors
18:17:34-487497 INFO Installing package: scipy
18:17:45-312038 ERROR Error running pip: install --upgrade scipy
18:17:45-313956 INFO Installing package: tb_nightly
18:21:15-236585 INFO Installing package: toml
18:21:15-834654 INFO Installing package: torch
18:21:16-293210 INFO Installing package: torchdiffeq
18:21:30-083235 ERROR Error running pip: install --upgrade torchdiffeq
18:21:30-085018 INFO Installing package: torchsde
18:21:41-146777 ERROR Error running pip: install --upgrade torchsde
18:21:41-148515 INFO Installing package: torchvision
18:21:41-626291 INFO Installing package: tqdm
18:21:42-095709 INFO Installing package: voluptuous
18:21:42-728309 INFO Installing package: yapf
18:21:43-416383 INFO Installing package: scikit-image
18:21:52-278481 ERROR Error running pip: install --upgrade scikit-image
18:21:52-279724 INFO Installing package: accelerate==0.18.0
18:21:53-050059 INFO Installing package: opencv-python==4.7.0.72
18:21:53-946692 INFO Installing package: diffusers==0.15.0
18:21:55-056057 INFO Installing package: einops==0.4.1
18:21:55-639620 INFO Installing package: gradio==3.23.0
18:22:05-011137 INFO Installing package: numexpr==2.8.4
18:22:06-039711 INFO Installing package: pandas==1.5.3
18:22:09-831249 INFO Installing package: protobuf==3.20.3
18:22:10-514654 INFO Installing package: pytorch_lightning==1.9.4
18:22:11-971723 INFO Installing package: transformers==4.26.1
18:22:14-710458 ERROR Error running pip: install --upgrade transformers==4.26.1
18:22:14-711567 INFO Installing package: timm==0.6.13
18:22:15-373248 INFO Installing package: tomesd==0.1.2
18:22:16-240208 INFO Running setup
18:22:16-241050 INFO Installing packages
18:22:16-241458 INFO Installing package: git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1
18:22:19-889241 INFO Installing repositories
18:22:50-993989 INFO Installing submodules
18:23:36-086002 INFO Updating submodules
18:23:44-216426 INFO Extensions enabled: ['SwinIR',
'sd-extension-steps-animation',
'clip-interrogator-ext',
'sd-extension-aesthetic-scorer',
'sd-dynamic-thresholding', 'prompt-bracket-checker',
'sd-webui-controlnet', 'ScuNET',
'stable-diffusion-webui-rembg',
'sd-webui-model-converter', 'Lora',
'sd-extension-system-info',
'stable-diffusion-webui-images-browser', 'LDSR',
'seed_travel',
'multidiffusion-upscaler-for-automatic1111',
'a1111-sd-webui-lycoris']
18:23:53-486592 ERROR Error running extension installer: /Users/backto1/Desktop/Vladmandic/automatic-master/automatic/extensions-builtin/clip-interrogator-ext/install.py
18:25:21-376146 ERROR Error running extension installer: /Users/backto1/Desktop/Vladmandic/automatic-master/automatic/extensions-builtin/stable-diffusion-webui-rembg/install.py
18:25:26-597108 INFO Extensions enabled: []
18:25:26-597989 INFO Updating Wiki
18:25:27-836033 WARNING Setup complete with errors (14)
18:25:27-838524 WARNING See log file for more details: setup.log
18:25:27-841121 INFO Running extension preloading
18:25:27-851996 INFO Server arguments: []
Error in sys.excepthook:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/rich/traceback.py", line 103, in excepthook
Traceback.from_exception(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/rich/traceback.py", line 346, in from_exception
return cls(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/rich/traceback.py", line 280, in __init__
for suppress_entity in suppress:
TypeError: 'module' object is not iterable

Original exception was:
Traceback (most recent call last):
File "/Users/backto1/Desktop/Vladmandic/automatic-master/automatic/launch.py", line 99, in <module>
import webui
File "/Users/backto1/Desktop/Vladmandic/automatic-master/automatic/webui.py", line 21, in <module>
from modules import import_hook # pylint: disable=W0611,C0411,C0412
File "/Users/backto1/Desktop/Vladmandic/automatic-master/automatic/modules/import_hook.py", line 2, in <module>
from modules.shared import opts
File "/Users/backto1/Desktop/Vladmandic/automatic-master/automatic/modules/shared.py", line 231, in <module>
default_checkpoint = list_checkpoint_tiles()[0] if len(list_checkpoint_tiles()) > 0 else "model.ckpt"
File "/Users/backto1/Desktop/Vladmandic/automatic-master/automatic/modules/shared.py", line 188, in list_checkpoint_tiles
import modules.sd_models # pylint: disable=W0621
File "/Users/backto1/Desktop/Vladmandic/automatic-master/automatic/modules/sd_models.py", line 11, in <module>
import safetensors.torch

@system1system2

AFAIK, the most competent person on Apple Silicon compatibility & optimizations for Stable Diffusion/A1111 is @brkirch. @vladmandic, you should see if they want to help.

This is their experimental fork of A1111 optimized for M1: https://github.com/brkirch/stable-diffusion-webui/tree/mac-builds-experimental

I have an M2 Max with 96GB RAM and I'd be happy to test any configuration you want.

@ptppan

ptppan commented Apr 26, 2023

I kneel. It works on Mac now, thank you so much, devs! I am trying to img2img at 1408x1408 resolution (which works on A1111), but on Vladomatic I get this error:

/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:725: failed assertion `[MPSNDArray initWithDevice:descriptor:] Error: total bytes of NDArray > 2**32'
zsh: abort      ./webui.sh
@MacBook-Pro-7 automatic % /Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

Does anyone else get this issue? I am using an M2 Max with 64GB RAM, and generations at that resolution work on A1111.

@vladmandic
Owner

ok, so all community suggestions on what to do for defaults on M1 setups have been added, and i haven't seen any further updates in this thread, so i'll close it.
if there are any remaining issues or further tuning needed, let's start a new thread, as there is a lot of history here.

note that there are still quite a few reports of binary incompatibilities on some systems - unless they come with suggestions to install a different version of the binary package causing the errors, i really cannot help, as i cannot debug the underlying binary packages.

@vladmandic
Owner

AFAIK, the most competent person on Apple Silicon compatibility & optimizations for Stable Diffusion/A1111 is @brkirch.

Thanks, I've reached out - will see what happens.

@ZachNagengast

Sharing my anecdotal evidence using an M2 Mac, because I tried everything in this thread and nothing worked:

I was using Python 3.9 before.

When I tried running it with a fresh install of Python 3.10, it worked on the first try. No idea why, but hopefully that's helpful to someone in the same situation.
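Since the Python minor version turned out to matter here, a quick way to check which interpreter will seed a fresh venv before reinstalling (a generic check, not specific to this repo):

```shell
# Show the interpreter that a new venv would be built from,
# and whether it is at least Python 3.10.
python3 --version
python3 -c 'import sys; print(sys.version_info >= (3, 10))'
```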
