feat(grammar): add llama3.1 schema #3015
Merged
Conversation
Signed-off-by: Ettore Di Giacinto <[email protected]>
dave-gray101 approved these changes on Jul 26, 2024
I think the merging of grammars down makes sense. Llama 3.1 compatibility enhancements are a boon as well!
truecharts-admin referenced this pull request in truecharts/charts on Jul 28, 2024:
…9.3 by renovate (#24494)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-aio-cpu` -> `v2.19.3-aio-cpu` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-aio-gpu-nvidia-cuda-11` -> `v2.19.3-aio-gpu-nvidia-cuda-11` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-aio-gpu-nvidia-cuda-12` -> `v2.19.3-aio-gpu-nvidia-cuda-12` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-cublas-cuda11-ffmpeg-core` -> `v2.19.3-cublas-cuda11-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-cublas-cuda11-core` -> `v2.19.3-cublas-cuda11-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-cublas-cuda12-ffmpeg-core` -> `v2.19.3-cublas-cuda12-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-cublas-cuda12-core` -> `v2.19.3-cublas-cuda12-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2-ffmpeg-core` -> `v2.19.3-ffmpeg-core` |
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) | patch | `v2.19.2` -> `v2.19.3` |

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.19.3`](https://togithub.com/mudler/LocalAI/releases/tag/v2.19.3)

[Compare Source](https://togithub.com/mudler/LocalAI/compare/v2.19.2...v2.19.3)

##### What's Changed

##### Bug fixes 🐛

- fix(gallery): do not attempt to delete duplicate files by [@mudler](https://togithub.com/mudler) in [#3031](https://togithub.com/mudler/LocalAI/pull/3031)
- fix(gallery): do clear out errors once displayed by [@mudler](https://togithub.com/mudler) in [#3033](https://togithub.com/mudler/LocalAI/pull/3033)

##### Exciting New Features 🎉

- feat(grammar): add llama3.1 schema by [@mudler](https://togithub.com/mudler) in [#3015](https://togithub.com/mudler/LocalAI/pull/3015)

##### 🧠 Models

- models(gallery): add llama3.1-claude by [@mudler](https://togithub.com/mudler) in [#3005](https://togithub.com/mudler/LocalAI/pull/3005)
- models(gallery): add darkidol llama3.1 by [@mudler](https://togithub.com/mudler) in [#3008](https://togithub.com/mudler/LocalAI/pull/3008)
- models(gallery): add gemmoy by [@mudler](https://togithub.com/mudler) in [#3009](https://togithub.com/mudler/LocalAI/pull/3009)
- chore: add function calling template for llama 3.1 models by [@mudler](https://togithub.com/mudler) in [#3010](https://togithub.com/mudler/LocalAI/pull/3010)
- chore: models(gallery): ⬆️ update checksum by [@localai-bot](https://togithub.com/localai-bot) in [#3013](https://togithub.com/mudler/LocalAI/pull/3013)
- models(gallery): add mistral-nemo by [@mudler](https://togithub.com/mudler) in [#3019](https://togithub.com/mudler/LocalAI/pull/3019)
- models(gallery): add llama3.1-8b-fireplace2 by [@mudler](https://togithub.com/mudler) in [#3018](https://togithub.com/mudler/LocalAI/pull/3018)
- models(gallery): add lumimaid-v0.2-12b by [@mudler](https://togithub.com/mudler) in [#3020](https://togithub.com/mudler/LocalAI/pull/3020)
- models(gallery): add darkidol-llama-3.1-8b-instruct-1.1-uncensored-iq… by [@mudler](https://togithub.com/mudler) in [#3021](https://togithub.com/mudler/LocalAI/pull/3021)
- models(gallery): add meta-llama-3.1-8b-instruct-abliterated by [@mudler](https://togithub.com/mudler) in [#3022](https://togithub.com/mudler/LocalAI/pull/3022)
- models(gallery): add llama-3.1-70b-japanese-instruct-2407 by [@mudler](https://togithub.com/mudler) in [#3023](https://togithub.com/mudler/LocalAI/pull/3023)
- models(gallery): add llama-3.1-8b-instruct-fei-v1-uncensored by [@mudler](https://togithub.com/mudler) in [#3024](https://togithub.com/mudler/LocalAI/pull/3024)
- models(gallery): add openbuddy-llama3.1-8b-v22.1-131k by [@mudler](https://togithub.com/mudler) in [#3025](https://togithub.com/mudler/LocalAI/pull/3025)
- models(gallery): add lumimaid-8b by [@mudler](https://togithub.com/mudler) in [#3026](https://togithub.com/mudler/LocalAI/pull/3026)
- models(gallery): add llama3 with enforced functioncall with grammars by [@mudler](https://togithub.com/mudler) in [#3027](https://togithub.com/mudler/LocalAI/pull/3027)
- chore(model-gallery): ⬆️ update checksum by [@localai-bot](https://togithub.com/localai-bot) in [#3036](https://togithub.com/mudler/LocalAI/pull/3036)

##### 👒 Dependencies

- chore: ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in [#3003](https://togithub.com/mudler/LocalAI/pull/3003)
- chore: ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in [#3012](https://togithub.com/mudler/LocalAI/pull/3012)
- chore: ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in [#3016](https://togithub.com/mudler/LocalAI/pull/3016)
- chore: ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in [#3030](https://togithub.com/mudler/LocalAI/pull/3030)
- chore: ⬆️ Update ggerganov/whisper.cpp by [@localai-bot](https://togithub.com/localai-bot) in [#3029](https://togithub.com/mudler/LocalAI/pull/3029)
- chore: ⬆️ Update ggerganov/llama.cpp by [@localai-bot](https://togithub.com/localai-bot) in [#3034](https://togithub.com/mudler/LocalAI/pull/3034)

##### Other Changes

- docs: ⬆️ update docs version mudler/LocalAI by [@localai-bot](https://togithub.com/localai-bot) in [#3002](https://togithub.com/mudler/LocalAI/pull/3002)
- refactor: break down json grammar parser in different files by [@mudler](https://togithub.com/mudler) in [#3004](https://togithub.com/mudler/LocalAI/pull/3004)
- fix: PR title tag for checksum checker script workflow by [@dave-gray101](https://togithub.com/dave-gray101) in [#3014](https://togithub.com/mudler/LocalAI/pull/3014)

**Full Changelog**: mudler/LocalAI@v2.19.2...v2.19.3

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these updates again.

---

This PR has been generated by [Renovate Bot](https://togithub.com/renovatebot/renovate).
Description

This PR allows LocalAI to generate BNF rules that constrain the LLM output to valid JSON in the llama3.1 format. It also introduces a series of refactorings that make it easier to extend the grammar generation to other schemas.

The new schema forces the LLM output into this format:

<function=example_function_name>{{"example_name": "example_value"}}</function>

which is the common format for Llama 3.1 function calling.

How it works: to enable this behavior, set `schema_type` to `llama3.1`. This forces the LLM to always return function/tool calls in the llama3.1 format.
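As a sketch only, a model configuration enabling this might look like the following YAML. The exact placement of `schema_type` under the function/grammar settings is an assumption, not confirmed by this page; consult the LocalAI documentation for the authoritative layout.

```yaml
# Hypothetical model config sketch: the nesting of schema_type under
# function.grammar is assumed, not verified against this PR.
name: llama-3.1-8b-instruct
function:
  grammar:
    schema_type: llama3.1  # constrain output to the llama3.1 tool-call format
```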
The behavior is also kept with mixed grammars; I've tested this as well.
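To illustrate what a consumer of this constrained output has to do, here is a minimal, hypothetical parser sketch (not part of this PR). It assumes the payload between the tags is a single JSON object; the doubled braces in the format shown above are taken to be template escaping, so the actual model output carries plain JSON braces.

```python
import json
import re

# Hypothetical helper: extract the function name and JSON arguments from a
# llama3.1-style tool call such as
#   <function=example_function_name>{"example_name": "example_value"}</function>
CALL_RE = re.compile(r"<function=([^>]+)>(.*?)</function>", re.DOTALL)

def parse_tool_call(text: str):
    """Return (name, args) for the first tool call in `text`, or None."""
    match = CALL_RE.search(text)
    if match is None:
        return None
    name, payload = match.group(1), match.group(2)
    return name, json.loads(payload)

print(parse_tool_call('<function=get_weather>{"city": "Turin"}</function>'))
# → ('get_weather', {'city': 'Turin'})
```

Because the grammar guarantees the tag-and-JSON shape, a simple regex plus `json.loads` is enough; no fallback repair logic is needed for well-formed outputs.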
Notes for Reviewers
Signed commits