Replies: 3 comments
-
Downloading to test now. But it looks like this model was developed in partnership with Nvidia, and I wouldn't put it past them to purposely do something to the model outside what is normal to break ROCm. Edit: Yes, this LLM in particular does not work. I tried 3 separate quants of this model and none of them worked.
-
In the latest version, 1.71, the Mistral-Nemo-Instruct GGUF works fine. Thanks to the author of koboldcpp-rocm!!! I tested Mistral-Nemo and I really liked this model. It fits completely on the Radeon RX 6800 XT and therefore runs very fast. The quality of Mistral-Nemo's responses is higher than that of a 20-billion-parameter Llama 2.
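A rough back-of-the-envelope check supports the "fits completely" claim above. This sketch assumes figures not stated in the thread: Mistral-Nemo has roughly 12.2B parameters, a Q4_K_M quant averages about 4.8 bits per weight, and the RX 6800 XT has 16 GB of VRAM.

```python
# Hypothetical VRAM-fit estimate; the parameter count, bits-per-weight,
# and overhead figures are assumptions, not values from this thread.

def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate on-disk/in-VRAM size of quantized weights in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

weights_gb = model_size_gb(12.2, 4.8)          # roughly 7.3 GB of weights
overhead_gb = 3.0                              # assumed KV cache + compute buffers
fits_in_16gb = weights_gb + overhead_gb < 16.0
```

Even with a generous allowance for the KV cache and compute buffers, a ~7 GB quant leaves plenty of headroom in 16 GB, which is consistent with the model running fully on-GPU and fast.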
-
Greetings,
I've seen lots of people saying how great this new model is, but I keep getting an unhandled exception every time I try. Has anyone had any luck with this model or one like it?
v1.70

v1.67

I'm running Windows 11 Pro, an AMD Ryzen 5 7600X, and 64 GB of DDR5-4800 RAM. Thanks!