Segmentation Fault with 7b on Raspberry Pi 3 #213
Comments
Same here: segmentation fault, but on an old Linux x86_64 EliteBook laptop. In this case it was `-mavx` causing the error; compiling with `-msse3` instead worked: `gcc -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -msse3 -c ggml.c -o ggml.o`
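For context: `-mavx` lets GCC emit AVX instructions unconditionally, and on a CPU without AVX those raise an illegal-instruction fault at runtime, which is easy to mistake for a plain segfault. Here is a minimal sketch (hypothetical, not ggml's actual code) that probes the CPU at runtime with GCC's `__builtin_cpu_supports`, to confirm whether a `-mavx` build can run at all:

```c
/* avx_probe.c -- hypothetical sketch, not part of ggml (x86/x86_64, GCC).
 * Checks at runtime whether the CPU can execute AVX instructions,
 * i.e. whether a binary built with -mavx is safe to run here.
 * Build: gcc -O2 avx_probe.c -o avx_probe
 */
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();  /* populate the CPU feature table */
    if (__builtin_cpu_supports("avx")) {
        printf("AVX supported: a -mavx build should run here\n");
    } else {
        printf("No AVX: a -mavx build will crash; rebuild with -msse3\n");
    }
    return 0;
}
```

If it prints the second line, dropping `-mavx` from the compile flags, as in the command above, is the fix.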
Same problem here, working with an AWS EC2 instance, and it happens even without building from source. The file size is 4.0 GB; maybe that's the problem...
Hello, I also have the same issue with a Raspberry Pi 4 (4 GB). I'm not sure the file size is the issue, as this blog got it working on an RPi 5, also with the same 4 GB of RAM.
You might also want to trim down any running processes. Are you running a non-graphical session, perhaps?
Yeah, I run headless. I ended up using a 2-bit model instead, about 2+ GB in size, and it seemed to work.
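That matches rough sizing arithmetic: the weights of a 7B model take about `7e9 * bits / 8` bytes, so roughly 3.3 GiB at 4-bit but only about 1.6 GiB at 2-bit; quantization block overhead pushes the real files a bit higher, consistent with the 2+ GB file mentioned above. A back-of-the-envelope sketch (assumed figures; ignores block overhead and runtime context memory):

```c
/* model_size.c -- back-of-the-envelope sketch. Assumes exactly 7e9
 * parameters and ignores quantization block overhead and runtime
 * context memory, so real files are somewhat larger.
 * Build: gcc -O2 model_size.c -o model_size
 */
#include <stdio.h>

int main(void) {
    const double params = 7e9;           /* "7B" parameter count */
    const int widths[] = {16, 8, 4, 2};  /* bits per weight */
    for (int i = 0; i < 4; i++) {
        double gib = params * widths[i] / 8.0 / (1024.0 * 1024.0 * 1024.0);
        printf("%2d-bit weights: ~%.2f GiB\n", widths[i], gib);
    }
    return 0;
}
```

On a 4 GB board, the 4-bit weights alone leave almost no headroom for the OS and inference buffers, while the 2-bit weights fit comfortably.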
```
leafy@raspberrypi:~/alpaca.cpp $ ./chat
main: seed = 1681116282
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.34 MB
Segmentation fault
```
Tried to run the 7B Alpaca model on my Raspberry Pi 3, but I get a segmentation fault every time. I compiled it from source. The RPi 3 has 4 GB of RAM; is that the problem?
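One plausible mechanism for this failure mode (a guess, not traced through this codebase): the log shows ggml asking for a ~6 GB context, and if that allocation fails and the returned `NULL` pointer is used unchecked, the first write through it segfaults. On a 32-bit OS, which many Pi 3 installs run, a single process cannot even address 6 GB, so the allocation is bound to fail. A hypothetical sketch of the defensive pattern:

```c
/* ctx_alloc.c -- hypothetical sketch of guarding a large allocation;
 * not the actual alpaca.cpp/ggml code path. The 6065 MB figure is
 * taken from the log above.
 * Build: gcc -O2 ctx_alloc.c -o ctx_alloc
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    unsigned long long need = 6065ULL * 1024 * 1024;  /* ~6 GB context */
    if (need > SIZE_MAX) {
        /* 32-bit process: this much memory cannot even be addressed. */
        fprintf(stderr, "context larger than the address space\n");
        return 1;
    }
    unsigned char *buf = malloc((size_t)need);
    if (buf == NULL) {
        /* Without this check, the first write through buf segfaults. */
        fprintf(stderr, "cannot allocate %llu bytes\n", need);
        return 1;
    }
    buf[0] = 0;  /* first touch; with overcommit even this can be OOM-killed */
    free(buf);
    return 0;
}
```

Either way, the practical answer for a 4 GB board is a smaller quantization, as noted in the comments above.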