[Issue]: Compatibility with Apple M1 #347
Comments
yes, I'm aware. new torch is very different and I have an open ask for the M1 community to provide proper package instructions - what exactly should be installed automatically. I don't have an M1 system available, so it's up to the community to give me some guidance, but so far I have not received any input. it was the same for AMD, and that community is very active, so there has been a lot of progress.
I got the same error on my MBA M2 and fixed it, but I'm not completely sure how. I think it had something to do with the steps below. I copied over the "webui-macos-env.sh" file from my Automatic1111 install and commented out every line in it except two. I deleted my venv and then ran webui.sh; it installed those two versions of torch and torchvision, then threw an error and crashed, so I commented out that torch command. Without deleting the 1.12.1 install, I re-ran webui.sh. It installed torch 2.0 and it has been working since. I left the export PYTORCH_ENABLE_MPS_FALLBACK=1 line uncommented, as it solved a different error. Edit: forgot to mention I'm running build 7c684a8.
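For anyone trying to reproduce this, here is a hypothetical reconstruction of what those two uncommented lines likely were; the exact contents of webui-macos-env.sh vary by Automatic1111 version, so treat this as a sketch, not the file itself:

```shell
# Hypothetical reconstruction: the torch pin matches the 1.12.1 install
# described above, and the fallback line is the one left uncommented.
export TORCH_COMMAND="pip install torch==1.12.1 torchvision==0.13.1"
export PYTORCH_ENABLE_MPS_FALLBACK=1
```

torchvision 0.13.1 is the release paired with torch 1.12.1, which is why the two are pinned together.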
Maybe this discussion is helpful:
based on community feedback so far, i've made two changes:
let me know what else you think should be done to support the Apple platform.
Instead of
ok... tried some of the above stuff and now I'm getting: ModuleNotFoundError: No module named 'clip'
I get the same - illegal hardware instruction - when running ./webui.sh. Just to confirm with other users: which file are you running?
I've tried to install it just now (Apr. 24th, 10 AM GMT) from a fresh git clone. Various errors during install, mostly from not having Xcode installed. Full log here on Pastebin. After installing Xcode, all the failed pip installs worked.
Launched with
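The Xcode-related failures above can usually be caught up front. A guarded check, assuming the missing piece is the command-line tools rather than the full Xcode app:

```shell
# On macOS, trigger the command-line tools installer if the tools are
# missing; on any other system this just prints a note and exits cleanly.
if command -v xcode-select >/dev/null 2>&1; then
    xcode-select -p >/dev/null 2>&1 || xcode-select --install
else
    echo "xcode-select not found (not macOS)"
fi
```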
Following up on my earlier post, after the changes vladmandic made I removed the webui-macos-env.sh file (since what I was using it for is now built-in) and everything has been working for me. I have "Use full precision for VAE" checked under CUDA Settings (but not "Use full precision for model") and "Enable upcast sampling" under Stable Diffusion settings. I think those are the only settings I've changed from the default that would affect image generation. I'm currently running build 98adfb3.
In my case the installation works fine on the M1. It does not download the default SD v1.5 model, but I imported it into the SD folder so that part works fine. As soon as I press the generate button, I get an error message saying that Python crashed. I tried different models but nothing helped.
Finally got it installed and running, but I cannot create images. Generation stops at 0% with:

```
loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/97f6331a-ba75-11ed-a4bc-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): error: input types 'tensor<2x1280xf32>' and 'tensor<1280xf16>' are not broadcast compatible
```
Continuing from #337: I'm able to generate images with the DPM++ 2M Karras sampler after checking
and then clicking Apply Settings.
I finally got it. I had to swap the venv folder with the one from Automatic1111. It seems the previous one was downgrading Python to 3.9.7 (from 3.10), which is why it wasn't running. Once initialized, I changed the sampler to Euler a. The default one, UniPC, doesn't work and gives me the error below.
512x512 image generation speed on an MBP M1 16GB, this project (--medvram and upcast sampling checked): almost the same.
It's normal that you get no module xformers, since xformers is only available for NVIDIA GPUs, and macOS doesn't support NVIDIA GPUs. For CLIP, you install it manually from openai/CLIP (https://github.com/openai/CLIP). I suspect you will be asked to install other requirements after CLIP; if so, you can install from the requirements.txt file with the command below. Note that I already had this installed from Automatic1111, so you might want to use the same.
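For reference, the manual CLIP install, and a follow-up requirements step, would look something like this (the requirements command is an assumption based on a standard requirements.txt layout, since the original command was not quoted):

```shell
# Install OpenAI's CLIP package straight from GitHub; this resolves
# "ModuleNotFoundError: No module named 'clip'".
pip install git+https://github.com/openai/CLIP.git

# Then install any remaining dependencies, run from the repo root
# (assumes the project ships a standard requirements.txt).
pip install -r requirements.txt
```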
Does it work for you to generate images, and if yes, what did you change to make it work? In my case everything looks good during startup, but as soon as I want to generate, Python quits.
Yes, it works. I followed what @hs1n suggested, i.e., changed the sampling method to something other than the default (UniPC), and enabled upcast sampling under Settings > CUDA Settings.
Python does not crash anymore, but I get this error: NotImplementedError: The operator 'aten::_linalg_solve_ex.result' is not currently implemented for the MPS device.
Yep. It seems you need to run this before running ./webui.sh so unsupported operators fall back to the CPU: export PYTORCH_ENABLE_MPS_FALLBACK=1. The error means you are hitting a PyTorch operator that is not yet implemented for the MPS device. MPS (Metal Performance Shaders) is Apple's GPU backend for PyTorch, and not every operator is supported on it yet; the fallback routes unsupported ones to the CPU. Otherwise, you can try downgrading PyTorch and rerunning the file. I would also add your error to the linked issue.
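Concretely, the fallback workaround looks like this; the variable must be in the environment before webui.sh starts Python, so export it in the same shell session:

```shell
# Route PyTorch operators not yet implemented for MPS to the CPU.
# Must be set before launch; changing it after torch is imported
# in the running process has no effect.
export PYTORCH_ENABLE_MPS_FALLBACK=1
echo "PYTORCH_ENABLE_MPS_FALLBACK=$PYTORCH_ENABLE_MPS_FALLBACK"
# then, from the repo root: ./webui.sh
```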
On my Mac M1 with 32GB, I was able to run it smoothly by simply enabling PYTORCH_ENABLE_MPS_FALLBACK=1; it seems there is no need to set --no-half, --no-half-vae, or upcast sampling. However, I am not sure how it will perform on a device with 16GB. I have submitted the PR and will try it on a 16GB device later.
FYI: I seem to have fixed my install problems by installing tensorflow-metal for Apple silicon... |
Had more issues this morning, installed PyTorch and it seems to have fixed the issues: |
Possible fix for experiencing
For
If you want to check/install requirements manually: if you use
Plus, you can use
@nybe Are you able to run automatic1111 or Vlad Diffusion inside miniconda? Those instructions from Apple look promising. One issue I have is that pip3 is installing modules outside the normal paths. I also think Apple's version of TensorFlow with Metal support might allow us to use our GPUs instead of just the CPUs (maybe even the NPUs too?)
Hi, I'm new to installing from GitHub. I'm on a Mac Mini M1 with 16GB RAM. I have no idea how to solve this, as I don't have a background in coding or GitHub in general. I then tried installing automatic1111 and got this error:
Any suggestions welcome!
nothing to do with M1 - those command line flags were removed and moved to UI settings.
@moebis Been running A1111 right outta the box, but I did install miniconda and it seems to have fixed a lot of the initial issues I was having running Vlad... I'm sure I've made a mess of my Mac with all the stuff I've been trying, but it's all smooth now (knock wood).
Sorry, I made the mistake of not installing Python and Git. I installed those and still got a lot of errors; here is the whole install process:

18:15:25-561048 INFO Python 3.10.11 on Darwin
Original exception was:
AFAIK, the most competent person on Apple Silicon compatibility & optimizations for Stable Diffusion/A1111 is
This is their experimental fork of A1111 optimized for M1: https://github.com/brkirch/stable-diffusion-webui/tree/mac-builds-experimental
I have an M2 Max with 96GB RAM and I'd be happy to test any configuration you want to test.
I kneel. It works on Mac now - thank you so much to the devs! I am trying to img2img at 1408x1408 resolution (which works on A1111), but on Vladomatic I get this error:
Does anyone else get this issue? I am using an M2 Max with 64GB RAM, and generations at that resolution work on A1111.
ok, so all community suggestions on defaults for M1 setups have been added, and i haven't seen any further updates in this thread, so i'll close it. note that there are still quite a few reports of binary incompatibilities on some systems - unless they come with a suggestion of which version of the offending binary package to install instead, i really cannot help, as i cannot debug the underlying binary packages.
Thanks, I've reached out - will see what happens. |
Sharing my anecdotal evidence on an M2 Mac, because I tried everything in this thread and nothing worked: I was using
When I tried running it with a fresh install of
Issue Description
Doesn't work at all on my M1 MacBook: many models crash the entire process, and some start "generating" but fail at the end with the following error:
I tried changing some settings like --no-half and "Upcast cross attention layer to float32", but it didn't work. A1111 is working on my Mac M1, so I'm not sure what changed here.

Version Platform Description
Commit Version: 57204b3