
Update Local Llama llama.cpp #96

Merged

merged 1 commit into main from nigeln/update_local_llama on Sep 29, 2023
Conversation

NigelNelson (Contributor)

Context

The Local Llama tutorial uses the llama.cpp repo, which previously introduced breaking changes that switched the model format from GGML to GGUF. Those changes are now stable, and GGUF support is broad enough that the tutorial should be updated to stay relevant.
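
For readers following the tutorial, a minimal sketch of producing a model in the new format, assuming the original weights are available locally in Hugging Face format and using llama.cpp's convert.py and quantize tools (the paths, filenames, and q4_0 quantization type are illustrative):

```bash
# Convert the original Hugging Face weights to a GGUF file (f16).
python3 convert.py models/llama-2-7b/ --outtype f16 --outfile models/llama-2-7b/llama-2-7b.f16.gguf

# Quantize the GGUF file to 4-bit for local inference.
./quantize models/llama-2-7b/llama-2-7b.f16.gguf models/llama-2-7b/llama-2-7b.Q4_0.gguf q4_0
```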

Description

Updates:

  • Remove the hack needed for ARM64 support (the updated llama.cpp now includes this fix)
  • Correct all mentions and uses of GGML to their GGUF equivalents
  • Update the ./server output snippet (a sketch of the updated invocation follows this list)
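
As a rough sketch of the last point, the updated snippet starts ./server against a GGUF file instead of a GGML one; the model path, context size, and port below are illustrative:

```bash
# Serve the quantized GGUF model over HTTP with llama.cpp's built-in server.
./server -m models/llama-2-7b/llama-2-7b.Q4_0.gguf -c 2048 --host 0.0.0.0 --port 8080
```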

@NigelNelson added the enhancement (New feature or request) label on Sep 28, 2023
@NigelNelson self-assigned this on Sep 28, 2023
@jjomier (Contributor) left a comment

LGTM!

@NigelNelson merged commit dbe7951 into main on Sep 29, 2023
3 checks passed
@NigelNelson deleted the nigeln/update_local_llama branch on September 29, 2023 at 17:25