After uploading to the website, my score was 4.76. My test code:

```python
from inference.models import api_model
import json
import requests
from transformers import AutoTokenizer, AutoModel
#import zhipuai
#zhipuai.api_key = ""  # TODO

class chatglm(api_model):
    def __init__(self, workers=10):
        #self.model_name = "chatglm_turbo"  # TODO
        self.temperature = 0.7
        self.workers = workers
        model_path = "/v1/llm/models/chatglm3-6b/"
        self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
        self.model = AutoModel.from_pretrained(model_path, trust_remote_code=True, device="cuda")

    def get_api_result(self, prompt):
        question = prompt["question"]
        temperature = prompt.get("temperature", self.temperature)
        output, history = self.model.chat(self.tokenizer, question, temperature=temperature, history=[])
        # response = zhipuai.model_api.invoke(
        #     model=self.model_name,
        #     prompt=single_turn_wrapper(question),
        #     temperature=temperature
        # )
        # output = response.get("data").get("choices")[0].get("content")
        return output
```
My results are about the same as yours; the gap may come from a difference in the official system prompt, or from other parameters tuned in genConfig.
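One way to probe that hypothesis: ChatGLM3's `chat()` accepts a `history` list (which can carry a system-role turn) plus extra sampling kwargs, so the system prompt and generation parameters can be varied without changing the rest of the harness. A minimal sketch; the prompt text, `build_chat_kwargs` helper, and parameter values below are placeholders, not the official evaluation's configuration:

```python
# Hypothetical helper: assemble a history containing a system turn and
# a dict of sampling kwargs that can be forwarded to model.chat().
# The concrete values here are illustrative guesses, not the official ones.

def build_chat_kwargs(system_prompt, temperature=0.7, top_p=0.8):
    """Return (history, gen_kwargs) for a ChatGLM3-style chat() call."""
    history = [{"role": "system", "content": system_prompt}]
    gen_kwargs = {"temperature": temperature, "top_p": top_p, "do_sample": True}
    return history, gen_kwargs

# Possible use inside get_api_result:
# history, gen_kwargs = build_chat_kwargs("You are a helpful assistant.")
# output, history = self.model.chat(self.tokenizer, question,
#                                   history=history, **gen_kwargs)
```

Sweeping `temperature`/`top_p` here, and swapping in different system prompts, would show how sensitive the 4.76 score is to those settings.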