Replies: 1 comment
-
Hello @tlam-lyra, was your prompt perhaps longer than the API's maximum number of characters, or the model's maximum number of tokens it can process? We had a similar question answered recently by @anakin87 here: deepset-ai/haystack-core-integrations#912 (comment)
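To make the suggestion concrete, here is a small illustrative sketch (not Haystack or Bedrock code) of why an over-long prompt can come back truncated: if the prompt exceeds the model's context budget, the extra tokens are silently cut. The token limit and whitespace tokenizer below are made-up placeholders for illustration; check your model's actual context size and tokenizer.

```python
# Illustrative only: the real limit and tokenization depend on the Bedrock model.
MODEL_MAX_TOKENS = 8  # assumed context window, in tokens (placeholder value)

def count_tokens(text: str) -> int:
    """Crude whitespace 'tokenizer', just for illustration."""
    return len(text.split())

def fits_in_context(prompt: str, max_tokens: int = MODEL_MAX_TOKENS) -> bool:
    """True if the prompt fits within the assumed context window."""
    return count_tokens(prompt) <= max_tokens

def truncate_to_context(prompt: str, max_tokens: int = MODEL_MAX_TOKENS) -> str:
    """Mimics what a generator may silently do when the prompt is too long."""
    return " ".join(prompt.split()[:max_tokens])
```

Comparing `count_tokens(prompt)` against the model's documented limit is a quick way to confirm whether truncation is coming from the prompt side rather than from the generation-length settings.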
-
Apologies if this is more of a question for the haystack-core-integrations repo instead of the main repo, but when I initialized AmazonBedrockGenerator like so, I expected the prompt and the generated text not to be truncated. However, that did not turn out to be the case for some of the prompts. Is this the right assumption, and is something else in my LLM pipeline causing this truncation issue, or am I missing additional keyword arguments to init AmazonBedrockGenerator?
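The original post's initialization snippet is missing from this copy of the thread, so as a stand-in, here is a hypothetical sketch of the kind of keyword arguments one might pass. The model id and the `max_length` parameter name are assumptions, not the poster's actual code; verify parameter names against the installed version of the Amazon Bedrock integration.

```python
# Hypothetical kwargs for AmazonBedrockGenerator; every value here is an
# assumption for illustration, not taken from the original post.
generator_kwargs = {
    "model": "anthropic.claude-v2",  # assumed Bedrock model id
    "max_length": 1024,              # assumed cap on generated tokens
}

# With the integration installed, initialization would look roughly like:
# from haystack_integrations.components.generators.amazon_bedrock import (
#     AmazonBedrockGenerator,
# )
# generator = AmazonBedrockGenerator(**generator_kwargs)
```

Note that a generation-length kwarg like this only bounds the output; if the prompt itself exceeds the model's context window, it can still be truncated on the input side.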