
💄 style: Update Bedrock model list & add AWS_BEDROCK_MODEL_LIST support (#3723)

* 💄 style: Update Bedrock model list
* 🐛 fix: fix typo
* 💄 style: add `AWS_BEDROCK_MODEL_LIST` support
* 💄 style: Update Bedrock model list (Comments unsupported models)
* 💄 style: rollback & update model description
* 💄 style: sync desc & token num from Bedrock
* 💄 style: update model list
* 💄 style: update Titan displayName
hezhijie0327 committed Sep 6, 2024
1 parent f738d10 commit 0aad972
Showing 5 changed files with 159 additions and 48 deletions.
2 changes: 1 addition & 1 deletion Dockerfile
@@ -109,7 +109,7 @@ ENV \
# Anthropic
ANTHROPIC_API_KEY="" ANTHROPIC_PROXY_URL="" \
# Amazon Bedrock
AWS_ACCESS_KEY_ID="" AWS_SECRET_ACCESS_KEY="" AWS_REGION="" \
AWS_ACCESS_KEY_ID="" AWS_SECRET_ACCESS_KEY="" AWS_REGION="" AWS_BEDROCK_MODEL_LIST="" \
# Azure OpenAI
AZURE_API_KEY="" AZURE_API_VERSION="" AZURE_ENDPOINT="" AZURE_MODEL_LIST="" \
# Baichuan
2 changes: 1 addition & 1 deletion Dockerfile.database
@@ -141,7 +141,7 @@ ENV \
# Anthropic
ANTHROPIC_API_KEY="" ANTHROPIC_PROXY_URL="" \
# Amazon Bedrock
AWS_ACCESS_KEY_ID="" AWS_SECRET_ACCESS_KEY="" AWS_REGION="" \
AWS_ACCESS_KEY_ID="" AWS_SECRET_ACCESS_KEY="" AWS_REGION="" AWS_BEDROCK_MODEL_LIST="" \
# Azure OpenAI
AZURE_API_KEY="" AZURE_API_VERSION="" AZURE_ENDPOINT="" AZURE_MODEL_LIST="" \
# Baichuan
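Both Docker images now expose the variable with an empty-string default. As a minimal, hypothetical sketch (not code from this commit) of how a runtime layer might treat that empty default as "not configured":

```ts
// Hypothetical normalization sketch — not part of this commit. The Dockerfiles above default
// AWS_BEDROCK_MODEL_LIST to "", so an empty string is treated the same as "not configured".
const rawBedrockModelList = process.env.AWS_BEDROCK_MODEL_LIST ?? '';

const bedrockModelList: string | undefined =
  rawBedrockModelList.trim() === '' ? undefined : rawBedrockModelList;

console.log(bedrockModelList ?? 'AWS_BEDROCK_MODEL_LIST is not set; the default Bedrock models apply');
```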
2 changes: 2 additions & 0 deletions src/config/llm.ts
@@ -65,6 +65,7 @@ export const getLLMConfig = () => {
TOGETHERAI_MODEL_LIST: z.string().optional(),

ENABLED_AWS_BEDROCK: z.boolean(),
AWS_BEDROCK_MODEL_LIST: z.string().optional(),
AWS_REGION: z.string().optional(),
AWS_ACCESS_KEY_ID: z.string().optional(),
AWS_SECRET_ACCESS_KEY: z.string().optional(),
@@ -162,6 +163,7 @@ export const getLLMConfig = () => {
ZEROONE_MODEL_LIST: process.env.ZEROONE_MODEL_LIST,

ENABLED_AWS_BEDROCK: process.env.ENABLED_AWS_BEDROCK === '1',
AWS_BEDROCK_MODEL_LIST: process.env.AWS_BEDROCK_MODEL_LIST,
AWS_REGION: process.env.AWS_REGION,
AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID,
AWS_SECRET_ACCESS_KEY: process.env.AWS_SECRET_ACCESS_KEY,
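`getLLMConfig` only threads the raw string through validation here; the parsing of the model-list value happens elsewhere in the repo and is not shown in this diff. A rough sketch, assuming the value follows the same convention as the project's other `*_MODEL_LIST` variables (comma-separated entries, `+id` to add, `-id` to remove, `id=Display Name` to rename, `-all` to drop the defaults) — the grammar is an assumption, not confirmed by this commit:

```ts
// Hypothetical sketch — the real parser lives elsewhere in the repo and is not part of this diff.
// Assumed value convention (mirroring the other *_MODEL_LIST variables): comma-separated entries,
// "+id" enables a model, "-id" disables one, "id=Display Name" renames it, "-all" clears the defaults.
interface ModelListOverride {
  add: Map<string, string | undefined>; // model id -> optional display name
  remove: Set<string>;
  removeAll: boolean;
}

const parseModelList = (raw: string | undefined): ModelListOverride => {
  const override: ModelListOverride = { add: new Map(), remove: new Set(), removeAll: false };
  if (!raw) return override;

  for (const entry of raw.split(',').map((s) => s.trim()).filter(Boolean)) {
    if (entry === '-all') {
      override.removeAll = true;
    } else if (entry.startsWith('-')) {
      override.remove.add(entry.slice(1));
    } else {
      const [id, displayName] = entry.replace(/^\+/, '').split('=');
      override.add.set(id, displayName);
    }
  }

  return override;
};

// Example: drop the defaults, then enable two Bedrock models and rename one of them.
const override = parseModelList(
  '-all,+anthropic.claude-3-5-sonnet-20240620-v1:0=Claude 3.5 Sonnet,+meta.llama3-1-70b-instruct-v1:0',
);
console.log(override.removeAll, [...override.add.entries()]);
```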
189 changes: 144 additions & 45 deletions src/config/modelProviders/bedrock.ts
@@ -2,50 +2,42 @@ import { ModelProviderCard } from '@/types/llm';

// ref https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html
// ref https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html
// ref https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/models
// ref https://us-west-2.console.aws.amazon.com/bedrock/home?region=us-west-2#/models
const Bedrock: ModelProviderCard = {
chatModels: [
/*
// TODO: Not support for now
{
description: 'Amazon Titan Text Lite is a light weight efficient model ideal for fine-tuning for English-language tasks, including like summarization and copywriting, where customers want a smaller, more cost-effective model that is also highly customizable.',
displayName: 'Titan Text G1 - Lite',
id: 'amazon.titan-text-lite-v1:0:4k',
id: 'amazon.titan-text-lite-v1',
tokens: 4000,
},
{
description:
'Amazon Titan Text G1 - Express v1, with a context length of up to 8,000 tokens, suitable for a wide range of uses.',
description: 'Amazon Titan Text Express has a context length of up to 8,000 tokens, making it well-suited for a wide range of advanced, general language tasks such as open-ended text generation and conversational chat, as well as support within Retrieval Augmented Generation (RAG). At launch, the model is optimized for English, with multilingual support for more than 100 additional languages available in preview.',
displayName: 'Titan Text G1 - Express',
id: 'amazon.titan-text-express-v1:0:8k',
id: 'amazon.titan-text-express-v1',
tokens: 8000,
},
{
displayName: 'Titan Text Premier',
id: 'amazon.titan-text-premier-v1:0:32K',
description: 'Titan Text Premier is a powerful and advanced model within the Titan Text family, designed to deliver superior performance across a wide range of enterprise applications. With its cutting-edge capabilities, it offers enhanced accuracy and exceptional results, making it an excellent choice for organizations seeking top-notch text processing solutions.',
displayName: 'Titan Text G1 - Premier',
id: 'amazon.titan-text-premier-v1:0',
tokens: 32_000,
},
*/
{
displayName: 'Jurassic-2 Mid',
enabled: true,
id: 'ai21.j2-mid-v1',
tokens: 8192,
},
{
displayName: 'Jurassic-2 Ultra',
enabled: true,
id: 'ai21.j2-ultra-v1',
tokens: 8192,
},
{
description:
'Claude 3 Opus is Anthropic’s most powerful AI model, with top-tier performance on highly complex tasks. It can navigate open-ended prompts and unseen scenarios with remarkable fluency and human-like understanding. Claude 3 Opus shows us the promise of generative AI. Claude 3 Opus can process images and return text outputs, and offers a 200K context window.',
displayName: 'Claude 3 Opus',
description: 'Claude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations, with the speed and cost of our mid-tier model, Claude 3 Sonnet.',
displayName: 'Claude 3.5 Sonnet',
enabled: true,
functionCall: true,
id: 'anthropic.claude-3-opus-20240229-v1:0',
id: 'anthropic.claude-3-5-sonnet-20240620-v1:0',
tokens: 200_000,
vision: true,
},
{
description:
'Claude 3 Sonnet from Anthropic strikes an ideal balance between intelligence and speed, especially for enterprise workloads. It offers maximum utility at a lower price than competitors, and is carefully engineered to be the dependable, high-endurance backbone for large-scale AI deployments. Claude 3 Sonnet can process images and return text outputs, and offers a 200K context window.',
description: 'Claude 3 Sonnet by Anthropic strikes the ideal balance between intelligence and speedโ€”particularly for enterprise workloads. It offers maximum utility at a lower price than competitors, and is engineered to be the dependable, high-endurance workhorse for scaled AI deployments. Claude 3 Sonnet can process images and return text outputs, and features a 200K context window.',
displayName: 'Claude 3 Sonnet',
enabled: true,
functionCall: true,
@@ -54,18 +46,16 @@ const Bedrock: ModelProviderCard = {
vision: true,
},
{
description:
'Claude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus across a wide range of benchmarks, and delivers outstanding performance at the speed and cost of a mid-tier model. Claude 3.5 Sonnet can process images and return text outputs, and offers a 200K context window.',
displayName: 'Claude 3.5 Sonnet',
description: 'Claude 3 Opus is Anthropic’s most powerful AI model, with state-of-the-art performance on highly complex tasks. It can navigate open-ended prompts and sight-unseen scenarios with remarkable fluency and human-like understanding. Claude 3 Opus shows us the frontier of what’s possible with generative AI. Claude 3 Opus can process images and return text outputs, and features a 200K context window.',
displayName: 'Claude 3 Opus',
enabled: true,
functionCall: true,
id: 'anthropic.claude-3-5-sonnet-20240620-v1:0',
id: 'anthropic.claude-3-opus-20240229-v1:0',
tokens: 200_000,
vision: true,
},
{
description:
'Claude 3 Haiku is Anthropic’s fastest and most compact model, with near-instant responsiveness. It can quickly answer simple queries and requests. Customers will be able to build seamless AI experiences that mimic human interaction. Claude 3 Haiku can process images and return text outputs, and offers a 200K context window.',
description: 'Claude 3 Haiku is Anthropic’s fastest, most compact model for near-instant responsiveness. It answers simple queries and requests with speed. Customers will be able to build seamless AI experiences that mimic human interactions. Claude 3 Haiku can process images and return text outputs, and features a 200K context window.',
displayName: 'Claude 3 Haiku',
enabled: true,
functionCall: true,
@@ -74,48 +64,157 @@ const Bedrock: ModelProviderCard = {
vision: true,
},
{
description:
'Claude 2.1 v2.1, with a context size of 200k. An updated version of Claude 2 with double the context window, and improved reliability, hallucination rates, and evidence-based accuracy in long-document and RAG contexts.',
description: 'An update to Claude 2 that features double the context window, plus improvements across reliability, hallucination rates, and evidence-based accuracy in long document and RAG contexts.',
displayName: 'Claude 2.1',
id: 'anthropic.claude-v2:1',
tokens: 200_000,
},
{
description:
'Claude Instant 1.2 v1.2, with a context size of 100k. A faster, more affordable yet still very capable model that can handle a range of tasks including casual dialogue, text analysis, summarization, and document question answering.',
displayName: 'Claude Instant 1.2',
description: 'Anthropic’s highly capable model across a wide range of tasks from sophisticated dialogue and creative content generation to detailed instruction following.',
displayName: 'Claude 2.0',
id: 'anthropic.claude-v2',
tokens: 100_000,
},
{
description: 'A fast, affordable yet still very capable model, which can handle a range of tasks including casual dialogue, text analysis, summarization, and document question-answering.',
displayName: 'Claude Instant',
id: 'anthropic.claude-instant-v1',
tokens: 100_000,
},
{
description: 'Mistral Large 2 128k',
displayName: 'Mistral Large 2',
description: 'An update to Meta Llama 3 8B Instruct that includes an expanded 128K context length, multilinguality and improved reasoning capabilities. The Llama 3.1 offering of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 models also support the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.',
displayName: 'Llama 3.1 8B Instruct',
enabled: true,
functionCall: true,
id: 'mistral.mistral-large-2407-v1:0',
id: 'meta.llama3-1-8b-instruct-v1:0',
tokens: 128_000,
},
{
description: 'Llama 3.1 405B Instruct',
description: 'An update to Meta Llama 3 70B Instruct that includes an expanded 128K context length, multilinguality and improved reasoning capabilities. The Llama 3.1 offering of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 models also support the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.',
displayName: 'Llama 3.1 70B Instruct',
enabled: true,
functionCall: true,
id: 'meta.llama3-1-70b-instruct-v1:0',
tokens: 128_000,
},
{
description: 'Meta Llama 3.1 405B Instruct is the largest and most powerful of the Llama 3.1 Instruct models that is a highly advanced model for conversational inference and reasoning, synthetic data generation, and a base to do specialized continual pre-training or fine-tuning on a specific domain. The Llama 3.1 offering of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 models also support the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.',
displayName: 'Llama 3.1 405B Instruct',
enabled: true,
functionCall: true,
id: 'meta.llama3-1-405b-instruct-v1:0',
tokens: 128_000,
},
{
description: 'Llama 3.1 70B Instruct',
displayName: 'Llama 3.1 70B Instruct',
description: 'Meta Llama 3 is an accessible, open large language model (LLM) designed for developers, researchers, and businesses to build, experiment, and responsibly scale their generative AI ideas. Part of a foundational system, it serves as a bedrock for innovation in the global community. Ideal for limited computational power and resources, edge devices, and faster training times.',
displayName: 'Llama 3 8B Instruct',
id: 'meta.llama3-8b-instruct-v1:0',
tokens: 8000,
},
{
description: 'Meta Llama 3 is an accessible, open large language model (LLM) designed for developers, researchers, and businesses to build, experiment, and responsibly scale their generative AI ideas. Part of a foundational system, it serves as a bedrock for innovation in the global community. Ideal for content creation, conversational AI, language understanding, R&D, and Enterprise applications.',
displayName: 'Llama 3 70B Instruct',
id: 'meta.llama3-70b-instruct-v1:0',
tokens: 8000,
},
/*
// TODO: Not support for now
{
description: 'A 7B dense Transformer, fast-deployed and easily customisable. Small, yet powerful for a variety of use cases. Supports English and code, and a 32k context window.',
displayName: 'Mistral 7B Instruct',
enabled: true,
id: 'meta.llama3-1-70b-instruct-v1:0',
id: 'mistral.mistral-7b-instruct-v0:2',
tokens: 32_000,
},
{
description: 'A 7B sparse Mixture-of-Experts model with stronger capabilities than Mistral 7B. Uses 12B active parameters out of 45B total. Supports multiple languages, code and 32k context window.',
displayName: 'Mixtral 8X7B Instruct',
enabled: true,
id: 'mistral.mixtral-8x7b-instruct-v0:1',
tokens: 32_000,
},
{
description: 'Mistral Small is perfectly suited for straightforward tasks that can be performed in bulk, such as classification, customer support, or text generation. It provides outstanding performance at a cost-effective price point.',
displayName: 'Mistral Small',
functionCall: true,
id: 'mistral.mistral-small-2402-v1:0',
tokens: 32_000,
},
{
description: 'Mistral Large 2407 is an advanced Large Language Model (LLM) that supports dozens of languages and is trained on 80+ coding languages. It has best-in-class agentic capabilities with native function calling JSON outputting and reasoning capabilities.',
displayName: 'Mistral Large 2 (24.07)',
enabled: true,
functionCall: true,
id: 'mistral.mistral-large-2407-v1:0',
tokens: 128_000,
},
{
description: 'Llama 3.1 8B Instruct',
displayName: 'Llama 3.1 8B Instruct',
description: 'The most advanced Mistral AI Large Language model capable of handling any language task including complex multilingual reasoning, text understanding, transformation, and code generation.',
displayName: 'Mistral Large',
enabled: true,
id: 'meta.llama3-1-8b-instruct-v1:0',
functionCall: true,
id: 'mistral.mistral-large-2402-v1:0',
tokens: 32_000,
},
*/
/*
// TODO: Not support for now
{
description: 'Command R+ is a highly performant generative language model optimized for large scale production workloads.',
displayName: 'Command R+',
enabled: true,
functionCall: true,
id: 'cohere.command-r-plus-v1:0',
tokens: 128_000,
},
{
description: 'Command R is a generative language model optimized for long-context tasks and large scale production workloads.',
displayName: 'Command R',
enabled: true,
functionCall: true,
id: 'cohere.command-r-v1:0',
tokens: 128_000,
},
*/
/*
// Cohere Command (Text) and AI21 Labs Jurassic-2 (Text) don't support chat with the Converse API
{
description: 'Command is Cohere’s flagship text generation model. It is trained to follow user commands and to be instantly useful in practical business applications.',
displayName: 'Command',
id: 'cohere.command-text-v14',
tokens: 4000,
},
{
description: 'Cohere Command-Light is a generative model that responds well with instruction-like prompts. This model provides customers with an unbeatable balance of quality, cost-effectiveness, and low-latency inference.',
displayName: 'Command Light',
id: 'cohere.command-light-text-v14',
tokens: 4000,
},
*/
/*
// TODO: Not support for now
{
description: 'The latest Foundation Model from AI21 Labs, Jamba-Instruct offers an impressive 256K context window and delivers the best value per price on core text generation, summarization, and question answering tasks for the enterprise.',
displayName: 'Jamba-Instruct',
id: 'ai21.jamba-instruct-v1:0',
tokens: 256_000,
},
*/
/*
// Cohere Command (Text) and AI21 Labs Jurassic-2 (Text) don't support chat with the Converse API
{
description: 'Jurassic-2 Mid is less powerful than Ultra, yet carefully designed to strike the right balance between exceptional quality and affordability. Jurassic-2 Mid can be applied to any language comprehension or generation task including question answering, summarization, long-form copy generation, advanced information extraction and many others.',
displayName: 'Jurassic-2 Mid',
id: 'ai21.j2-mid-v1',
tokens: 8191,
},
{
description: 'Jurassic-2 Ultra is AI21’s most powerful model for complex tasks that require advanced text generation and comprehension. Popular use cases include question answering, summarization, long-form copy generation, advanced information extraction, and more.',
displayName: 'Jurassic-2 Ultra',
id: 'ai21.j2-ultra-v1',
tokens: 8191,
},
*/
],
checkModel: 'anthropic.claude-instant-v1',
id: 'bedrock',
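For reference, a minimal sketch of how an `AWS_BEDROCK_MODEL_LIST` override could be applied on top of the default Bedrock card above. The `ChatModelCard` shape below only approximates the fields visible in this diff, and `applyOverride` is a hypothetical helper rather than the project's real merge logic:

```ts
// Sketch only — the field names approximate the ModelProviderCard entries in this diff;
// applyOverride is hypothetical and is not the project's actual merge logic.
interface ChatModelCard {
  description?: string;
  displayName?: string;
  enabled?: boolean;
  functionCall?: boolean;
  id: string;
  tokens?: number;
  vision?: boolean;
}

const defaultBedrockModels: ChatModelCard[] = [
  {
    displayName: 'Claude 3.5 Sonnet',
    enabled: true,
    id: 'anthropic.claude-3-5-sonnet-20240620-v1:0',
    tokens: 200_000,
  },
  {
    displayName: 'Llama 3.1 70B Instruct',
    enabled: true,
    id: 'meta.llama3-1-70b-instruct-v1:0',
    tokens: 128_000,
  },
];

// Apply an AWS_BEDROCK_MODEL_LIST-style override: "-id" hides a model, "id=New Name" renames one.
const applyOverride = (models: ChatModelCard[], raw = ''): ChatModelCard[] => {
  const entries = raw.split(',').map((s) => s.trim()).filter(Boolean);
  const removed = new Set(entries.filter((e) => e.startsWith('-')).map((e) => e.slice(1)));
  const renames = new Map(
    entries
      .filter((e) => !e.startsWith('-') && e.includes('='))
      .map((e) => e.replace(/^\+/, '').split('=') as [string, string]),
  );

  return models
    .filter((m) => !removed.has(m.id))
    .map((m) => (renames.has(m.id) ? { ...m, displayName: renames.get(m.id) } : m));
};

console.log(applyOverride(defaultBedrockModels, '-meta.llama3-1-70b-instruct-v1:0'));
```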