Fix max tokens and add model options #436

Open · wants to merge 10 commits into main

Conversation

RedwindA

  1. The max_tokens parameter was not being passed to the API. This version passes it and uses Math.min to ensure that max_tokens is always within a valid range (see the sketch below).
  2. An additional configuration has been added specifically for title generation, with the temperature set to 0 so that the generated titles are stable.
  3. The commented-out model options have been restored, because different models genuinely differ in capability and speed.

You can test it here first: https://redwinda.github.io/BetterChatGPT-origin/
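
For illustration, here is a minimal TypeScript sketch of the clamping in point 1 and the title-generation config in point 2. The identifiers (`ChatConfig`, `modelMaxToken`, the limit values) are assumptions made for this sketch, not the PR's exact code.

```ts
interface ChatConfig {
  model: string;
  max_tokens: number;
  temperature: number;
  top_p: number;
  presence_penalty: number;
  frequency_penalty: number;
}

// Assumed per-model token limits for this sketch.
const modelMaxToken: Record<string, number> = {
  'gpt-3.5-turbo': 4096,
  'gpt-3.5-turbo-16k': 16384,
  'gpt-4': 8192,
};

// Clamp the user's max_tokens so the value sent to the API is always valid
// for the selected model.
const clampMaxTokens = (config: ChatConfig): number =>
  Math.min(config.max_tokens, modelMaxToken[config.model] ?? 4096);

// Separate config used only for generating chat titles: temperature 0 gives
// deterministic, stable titles.
const titleGenConfig: ChatConfig = {
  model: 'gpt-3.5-turbo',
  max_tokens: 100,
  temperature: 0,
  top_p: 1,
  presence_penalty: 0,
  frequency_penalty: 0,
};
```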

@jackschedel
Contributor

Previously, max tokens was implemented as a limit on the model context rather than as the API's maximum return tokens. As far as I know, this is intended by the author.
I chose to implement separate max_tokens and max_context options in my fork
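
For readers following along, a hedged sketch of what keeping the two limits separate might look like; the field names below are hypothetical, not necessarily those used in the fork.

```ts
interface Message { role: 'system' | 'user' | 'assistant'; content: string; }

interface GenerationConfig {
  max_tokens: number;   // cap on tokens the API may return (sent in the request body)
  max_context: number;  // local budget for how much conversation history is sent
}

// Only max_tokens is sent to the API; max_context is used client-side to
// decide how much history to include before building the request.
const buildRequestBody = (
  model: string,
  prunedMessages: Message[], // already pruned to fit within config.max_context
  config: GenerationConfig
) => ({
  model,
  messages: prunedMessages,
  max_tokens: config.max_tokens,
});
```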

@RedwindA
Author

RedwindA commented Aug 18, 2023

https://platform.openai.com/docs/api-reference/chat/create#chat/create-max_tokens
OpenAI's API does not seem to support a "max_context" parameter, and I did not see how you limit the size of the messages sent based on "max_context" in your fork. Can you please explain in more detail how this parameter works?

> Previously, max tokens was implemented as a limit on the model context rather than as the API's maximum return tokens. As far as I know, this is intended by the author. I chose to implement separate max_tokens and max_context options in my fork

@RedwindA
Author

In addition, the parameter functionality implemented in my pull request is consistent with the author's current description of the parameter (only controlling the length of the latest message), so I believe this is the functionality they currently want.

@jackschedel
Contributor

@RedwindA they pass the messages through a function limitMessageTokens that iterates through them starting at the most recent and prunes the list so that it keeps the maximum number of complete messages before the token count would hit the max_tokens value.

I agree that their description of the max_tokens parameter is not consistent with what they are actually doing.
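
For context, a rough sketch of the pruning behavior described above, assuming a crude token-count helper; it approximates what limitMessageTokens does but is not the repository's exact implementation.

```ts
interface Message { role: 'system' | 'user' | 'assistant'; content: string; }

// Crude token estimate used only for this sketch; the real project uses a
// tiktoken-based counter.
const countTokens = (msg: Message): number =>
  Math.ceil(msg.content.length / 4);

// Walk backwards from the newest message and keep as many complete messages
// as fit under `limit`, so the request never exceeds the configured budget.
const limitMessageTokens = (messages: Message[], limit: number): Message[] => {
  const kept: Message[] = [];
  let total = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = countTokens(messages[i]);
    if (total + cost > limit) break;
    kept.unshift(messages[i]);
    total += cost;
  }
  return kept;
};
```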

@jackschedel
Contributor

Imo properly implementing this fix means having separate max tokens and max context config options.

@RedwindA
Author

damn, I didn't even notice the function limitMessageTokens, thank you for reminding me. They really did it in a confusing way; they shouldn't use a parameter with the same name as an official API parameter to perform a completely different function.

> @RedwindA they pass the messages through a function limitMessageTokens that iterates through them starting at the most recent and prunes the list so that it keeps the maximum number of complete messages before the token count would hit the max_tokens value.
>
> I agree that their description of the max_tokens parameter is not consistent with what they are actually doing.

@jackschedel
Contributor

I had a pretty lengthy discussion about it in jackschedel/KoalaClient#34 that you might want to read.
