
Models not downloading? #101

Open

SailingB0aty opened this issue Mar 18, 2023 · 29 comments

Comments

@SailingB0aty

I run the "npx dalai alpaca install 7B" command, and cmd seems to just stop, without downloading the model. Anyone have any ideas? Thanks!

[screenshot]

@dwunger

dwunger commented Mar 18, 2023

Same issue. Fixed. Posted a snip below

@Ionaut

Ionaut commented Mar 18, 2023

Same issue as well

@mybrainsshit

The screenshot above looks like you're using Windows PowerShell, which the instructions state does not work; use the regular cmd.exe instead? But even on cmd it's not working for me; it gets stuck on downloading.

@SailingB0aty
Author

I am running CMD, but you're right, it does seem to be running some of the commands through PowerShell, and I'm not sure why that is.

@dwunger

dwunger commented Mar 18, 2023

Fix:
Running the command from the user directory root fixed the issue.

EDIT:
Yes, this was the issue. Don't run it from the dalai directory; it needs to be run one level up. This worked in both my root and user directories, on primary and secondary drives.

[screenshot]

@SailingB0aty
Author

> Fix: Running the command from the user directory root fixed the issue.

This didn't work for me, unfortunately.

@mpl-cwassen

mpl-cwassen commented Mar 18, 2023

So, I am updating this: I found out Malwarebytes was causing the model downloads to fail. When I turned it off, the download proceeded normally. May be something to check.

@Ionaut

Ionaut commented Mar 18, 2023

I've tried installing from the command prompt in the root user directory with all firewalls turned off, and nothing has worked. I also tried running cmd as administrator and reinstalling Visual Studio. How did you guys troubleshoot this?

@GopalSaraf

Model not downloading properly?

Here is the solution.

The model is downloaded from https://ipfs.io/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC.
We can use Colab to download the model on Google's servers and upload it to Google Drive.
Then we can download the model from Google Drive (which is fast) and move it into dalai (for Alpaca 7B: dalai/alpaca/models/7B).

Here is what you have to do:

  1. Create a new Colab notebook.
  2. Connect it to Google Drive (just click the "mount Google Drive" button in Files).
  3. Execute the following line (for Alpaca 7B):
    !wget -O /content/drive/MyDrive/ggml-model-q4_0.bin -c https://ipfs.io/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC
  4. It will download the model into Google Drive.
  5. Download it from there and move it to dalai/alpaca/models/7B.

@JCBsystem

JCBsystem commented Mar 19, 2023

Sorry, maybe a stupid Q: where is "dalai/alpaca/models/7B" on a Mac?
Inside "llama.cpp"?

@GopalSaraf

Search for the model in the dalai folder and replace it with the downloaded one.

@JCBsystem

Thanks, found it. It all worked, including the download, when using npm instead of npx.

@Ionaut

Ionaut commented Mar 19, 2023

Great, so is there a link for LLaMA 7B as well?

@maphew

maphew commented Mar 21, 2023

Where should the file be placed after a manual download when using a local machine?

@mcmcford

> Where should the file be placed after a manual download when using a local machine?

I believe the default download location is your user folder, so it should be something along the lines of:
C:\Users\<your username>\dalai\alpaca\models\7B

C:/
└── Users/
    └── <your username>/
        └── dalai/
            └── alpaca/
                └── models/
                    └── 7B/
                        └── ggml-model-q4_0.bin (model file)
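
If in doubt, you can print the expected location with a quick Node snippet (a minimal sketch; it assumes the default dalai home under your user profile, matching the tree above):

    const os = require('os')
    const path = require('path')

    // dalai's default home is <user home>/dalai; the alpaca 7B model file
    // is expected at alpaca/models/7B/ggml-model-q4_0.bin beneath it.
    console.log(path.join(os.homedir(), 'dalai', 'alpaca', 'models', '7B', 'ggml-model-q4_0.bin'))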

@dmchmk

dmchmk commented Mar 22, 2023

The easiest way to work around the issue, which I found yesterday, is to disable downloading files during installation by commenting out one line in the llama script:

vim node_modules/dalai/llama.js

    const venv_path = path.join(this.root.home, "venv")
    const python_path = platform == "win32" ? path.join(venv_path, "Scripts", "python.exe") : path.join(venv_path, 'bin', 'python')
    /**************************************************************************************************************
    *
    * 5. Download models + convert + quantize
    *
    **************************************************************************************************************/
    for(let model of models) {
//      await this.download(model)
      const outputFile = path.resolve(this.home, 'models', model, 'ggml-model-f16.bin')
      // if (fs.existsSync(outputFile)) {
      //   console.log(`Skip conversion, file already exists: ${outputFile}`)
      // } else {
        await this.root.exec(`${python_path} convert-pth-to-ggml.py models/${model}/ 1`, this.home)

await this.download(model)

That is the line to comment out. Afterwards, you can copy your torrent-downloaded model files to:

ls -lah dalai/llama/models/
total 520K
drwxr-xr-x 6 llama llama 4.0K Mar  6 08:58 .
drwxrwxr-x 7 llama llama 4.0K Mar 21 12:33 ..
drwxr-xr-x 2 llama llama 4.0K Mar 21 15:50 13B
drwxr-xr-x 2 llama llama 4.0K Mar  6 08:45 30B
drwxr-xr-x 2 llama llama 4.0K Mar 21 18:02 65B
drwxr-xr-x 2 llama llama 4.0K Mar  6 08:07 7B
-rw-r--r-- 1 llama llama   50 Mar  5 10:34 tokenizer_checklist.chk
-rw-r--r-- 1 llama llama 489K Mar  5 10:36 tokenizer.model

And run dalai llama install <preferred-model> to start the quantization process.
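
A less invasive variant (hypothetical, not something dalai ships) is to guard the call instead of deleting it, so the download only runs when the raw weights are missing; fs, path, model and this.home are assumed to be in scope, as in the surrounding llama.js loop:

      // Hypothetical guard: only download when the raw LLaMA weights
      // (consolidated.00.pth) are not already present for this model.
      const modelFile = path.resolve(this.home, 'models', model, 'consolidated.00.pth')
      if (fs.existsSync(modelFile)) {
        console.log(`Skip download, weights already exist: ${modelFile}`)
      } else {
        await this.download(model)
      }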

@TheFlymo

> The easiest way to work around the issue ... is to disable downloading files during installation by commenting out one line in the llama script.

This worked for me, many thanks!

@maphew

maphew commented Mar 22, 2023

> Great, so is there a link for LLaMA 7B as well?

I found more Alpaca manual download links in the https://github.com/cocktailpeanut/alpaca.cpp readme (excerpt below). I'm searching for similar links for LLaMA 7B, but so far no luck. I'll edit this post if I find any.

Update: There's a lot of help in the Discord channels, including direct download links: https://discord.com/channels/1052861702561075230/1086836115555753994/1086836115555753994


My download is really slow or keeps getting interrupted
Try downloading the model(s) manually through the browser
LLaMA
7B
https://agi.gpt4.org/llama/LLaMA/7B/consolidated.00.pth

13B
https://agi.gpt4.org/llama/LLaMA/13B/consolidated.00.pth
https://agi.gpt4.org/llama/LLaMA/13B/consolidated.01.pth

30B
https://agi.gpt4.org/llama/LLaMA/30B/consolidated.00.pth
https://agi.gpt4.org/llama/LLaMA/30B/consolidated.01.pth
https://agi.gpt4.org/llama/LLaMA/30B/consolidated.02.pth
https://agi.gpt4.org/llama/LLaMA/30B/consolidated.03.pth

65B
https://agi.gpt4.org/llama/LLaMA/65B/consolidated.00.pth
https://agi.gpt4.org/llama/LLaMA/65B/consolidated.01.pth
https://agi.gpt4.org/llama/LLaMA/65B/consolidated.02.pth
https://agi.gpt4.org/llama/LLaMA/65B/consolidated.03.pth
https://agi.gpt4.org/llama/LLaMA/65B/consolidated.04.pth
https://agi.gpt4.org/llama/LLaMA/65B/consolidated.05.pth
https://agi.gpt4.org/llama/LLaMA/65B/consolidated.06.pth
https://agi.gpt4.org/llama/LLaMA/65B/consolidated.07.pth

Alpaca
Choose one of these 3 links
https://gateway.estuary.tech/gw/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC
https://ipfs.io/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC
https://cloudflare-ipfs.com/ipfs/QmQ1bf2BTnYxq73MFJWu1B7bQ2UD6qG7D7YDCxhTndVkPC

How to put models into the right folder:
You will first need to locate your Dalai folder. On Windows, it will be %USERPROFILE%\dalai. On macOS and Linux, it will be ~/dalai.
Move the models into their corresponding folders:
LLaMA
7B: dalai/llama/models/7B
13B: dalai/llama/models/13B
30B: dalai/llama/models/30B
65B: dalai/llama/models/65B

Alpaca
7B: dalai/alpaca/models/7B

After doing this, run npx dalai llama install 7B (replace llama and 7B with your corresponding model). The script will continue the process.


@haraphat01

I have a similar issue.

@aamiraym

I'm getting this error

#!/usr/bin/env node
;(function () { // wrapper in case we're in module_context mode
// windows: running "npm blah" in this folder will invoke WSH, not node.
/*global WScript*/
if (typeof WScript !== 'undefined') {
WScript.echo(
'npm does not work when run\n' +
'with the Windows Scripting Host\n\n' +
"'cd' to a different directory,\n" +
"or type 'npm.cmd ',\n" +
"or type 'node npm '."
)
WScript.quit(1)
return
}

process.title = 'npm'

var unsupported = require('../lib/utils/unsupported.js')
unsupported.checkForBrokenNode()

var log = require('npmlog')
log.pause() // will be unpaused when config is loaded.
log.info('it worked if it ends with', 'ok')

unsupported.checkForUnsupportedNode()

if (!unsupported.checkVersion(process.version).unsupported) {
var updater = require('update-notifier')
var pkg = require('../package.json')
updater({pkg: pkg}).notify({defer: true})
}

var path = require('path')
var npm = require('../lib/npm.js')
var npmconf = require('../lib/config/core.js')
var errorHandler = require('../lib/utils/error-handler.js')
var output = require('../lib/utils/output.js')

var configDefs = npmconf.defs
var shorthands = configDefs.shorthands
var types = configDefs.types
var nopt = require('nopt')

// if npm is called as "npmg" or "npm_g", then
// run in global mode.
if (path.basename(process.argv[1]).slice(-1) === 'g') {
process.argv.splice(1, 1, 'npm', '-g')
}

log.verbose('cli', process.argv)

var conf = nopt(types, shorthands)
npm.argv = conf.argv.remain
if (npm.deref(npm.argv[0])) npm.command = npm.argv.shift()
else conf.usage = true

if (conf.version) {
console.log(npm.version)
return errorHandler.exit(0)
}

if (conf.versions) {
npm.command = 'version'
conf.usage = false
npm.argv = []
}

log.info('using', 'npm@%s', npm.version)
log.info('using', 'node@%s', process.version)

process.on('uncaughtException', errorHandler)

if (conf.usage && npm.command !== 'help') {
npm.argv.unshift(npm.command)
npm.command = 'help'
}

// now actually fire up npm and run the command.
// this is how to use npm programmatically:
conf._exit = true
npm.load(conf, function (er) {
if (er) return errorHandler(er)
npm.commands[npm.command](npm.argv, function (err) {
// https://www.youtube.com/watch?v=7nfPu8qTiQU
if (!err && npm.config.get('ham-it-up') && !npm.config.get('json') && !npm.config.get('parseable') && npm.command !== 'completion') {
output('\n 🎵 I Have the Honour to Be Your Obedient Servant,🎵 ~ npm 📜🖋\n')
}
errorHandler.apply(this, arguments)
})
})
})()

@X10NLUN1X

> I found more Alpaca manual download links in the https://github.com/cocktailpeanut/alpaca.cpp readme ... After doing this, run npx dalai llama install 7B (replace llama and 7B with your corresponding model). The script will continue the process.

After doing so, it ignores my consolidated.pth files and redownloads them instead of installing them. Any solution?

@tschmidt-git

> I found more Alpaca manual download links in the https://github.com/cocktailpeanut/alpaca.cpp readme ... After doing this, run npx dalai llama install 7B (replace llama and 7B with your corresponding model). The script will continue the process.

What are the steps you take for manually downloading and then installing?

Start the install via npx, close the shell, download the models manually, place them in the correct folder (which was already created by the first install attempt), then open the shell and run npx dalai llama install 7B again?

@wei-ann-Github

> I am running CMD, but you're right, it does seem to be running some of the commands through PowerShell, and I'm not sure why that is.

I highly recommend using WSL2 on Windows, together with Windows Terminal, which you can download from the Microsoft Store on Windows 10 or later. WSL2 has been out for a few years already.

@bombaglad

Same for me; it's impossible to download anything.

@cap-nestor

> The easiest way to work around the issue ... is to disable downloading files during installation by commenting out the await this.download(model) line in node_modules/dalai/llama.js, then copying your torrent-downloaded model files in and running dalai llama install <preferred-model>.

One possible solution would be to add a flag (like --no-download) to install without downloading, or to check checksums and ask the user whether they want to download again. This could be useful on slow connections (you bring the model in on an external drive and move it into place manually).
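
A minimal sketch of what such a flag could look like inside llama.js (hypothetical; dalai does not currently ship a --no-download option, and models / this.download are assumed from the loop quoted above):

    // Hypothetical --no-download flag: skip the fetch, keep convert + quantize.
    const skipDownload = process.argv.includes('--no-download')
    for (let model of models) {
      if (!skipDownload) {
        await this.download(model)
      }
      // ...conversion and quantization continue unchanged
    }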

@sreeshas

Same for me. It retries the download when I copy consolidated.00.pth from my Downloads folder to the models/7B folder.
I'm trying to skip the download step.

I also tried closing and reopening the shell; it did not make a difference. This is on an M2 Mac.

@aryan-gupta

I believe the issue is that the code temporarily moves the dalai/llama/models/ folder into dalai/tmp/models/ before starting the download. When you copy the files into dalai/llama/models/, the folder gets moved out from under you and the copy thus fails. The way I fixed it is by adding these lines:

//      await this.download(model)
      console.log(`copy files in now`)
      await new Promise(resolve => setTimeout(resolve, 200000));

This gives the user 200,000 ms (a little over three minutes) to copy in the necessary files, as described in the previous replies. You can adjust the timing accordingly.
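
Instead of a fixed delay, one could poll until the weights show up (again a hypothetical variant; the consolidated.00.pth filename and the in-scope fs, path and this.home are assumed from the surrounding llama.js code):

//      await this.download(model)
      // Wait for the user to copy the raw weights in, checking every 5 seconds.
      const target = path.resolve(this.home, 'models', model, 'consolidated.00.pth')
      while (!fs.existsSync(target)) {
        console.log(`Waiting for ${target} to appear...`)
        await new Promise(resolve => setTimeout(resolve, 5000))
      }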

@ClockLoop

I wasn't able to find the code for the Linux downloader to add that wait code to, but I found that once the first file's download had started, I could put the rest of the files into that same folder; the downloader would skip the files already present and then process them. This is really only useful if you are downloading the multi-file models.

@hanebuechenes

Still the same problem...
