This article is a machine-translated mirror of the original post.


How to solve the Ollama model pull problem

Posted on 2025-2-8 08:31:40
For the past few days I have been hitting an annoying issue when pulling models from Ollama: I run ollama pull deepseek-r1:8b, it downloads about 4-5% of the model, then the connection is reset, the client crashes, and the download starts over from 0%.

It seems that I am not alone. (The original post links to a report of the same issue, but the link requires a login to view.)

Note: I'm running Ollama 0.5.7 and they will most likely fix the issue in the next release.

Solution

Someone has kindly posted a workaround: a bash script that repeatedly calls the Ollama client so the download resumes from where it left off (the client is supposed to resume interrupted downloads on its own, but it fails to do so properly after a crash).
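The original script is only available as a forum attachment, so here is a minimal sketch of such a retry wrapper. This is my own reconstruction, not the original script; it assumes ollama is on PATH and relies on the behavior described above, where re-running ollama pull picks up the partially downloaded model instead of starting from scratch:

```shell
#!/usr/bin/env bash
# ollama-pull.sh -- hypothetical reconstruction of the retry wrapper,
# not the original attached script.
# Re-runs `ollama pull` until it succeeds; each retry resumes from the
# partially downloaded data that Ollama keeps on disk.

pull_with_retry() {
    local model="$1"
    until ollama pull "$model"; do
        echo "Pull interrupted; retrying in 5 seconds..." >&2
        sleep 5
    done
    echo "Model $model pulled successfully."
}

# Pull the model given on the command line, e.g.:
#   ./ollama-pull.sh deepseek-r1:8b
if [ -n "${1:-}" ]; then
    pull_with_retry "$1"
fi
```

The loop deliberately does nothing clever: all the resume logic lives in the Ollama client itself; the script only papers over the crash by restarting the pull until it completes.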

Save the script in a file such as ollama-pull.sh and make it executable with the following command: chmod +x ollama-pull.sh



Then run the script as follows (replace deepseek-r1:8b with the name:tag of the model you want to pull):

./ollama-pull.sh deepseek-r1:8b




