
Large discrepancy in chatglm3 test results #11

Open
jyjyjyjyjyjyj opened this issue Dec 20, 2023 · 1 comment

Comments

@jyjyjyjyjyjyj commented Dec 20, 2023

After uploading to the website, the score was 4.76.
My test code:
```python
from inference.models import api_model
import json
import requests
from transformers import AutoTokenizer, AutoModel
# import zhipuai

# zhipuai.api_key = ""  # TODO

class chatglm(api_model):
    def __init__(self, workers=10):
        # self.model_name = "chatglm_turbo"  # TODO
        self.temperature = 0.7
        self.workers = workers

        model_path = "/v1/llm/models/chatglm3-6b/"
        self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
        self.model = AutoModel.from_pretrained(model_path, trust_remote_code=True, device="cuda")

    def get_api_result(self, prompt):
        question = prompt["question"]
        temperature = prompt.get("temperature", self.temperature)

        output, history = self.model.chat(self.tokenizer, question, temperature=temperature, history=[])
        # response = zhipuai.model_api.invoke(
        #     model=self.model_name,
        #     prompt=single_turn_wrapper(question),
        #     temperature=temperature
        # )
        # output = response.get("data").get("choices")[0].get("content")
        return output
```

@FoolMark commented

My results were about the same as yours. The gap is probably due to a different official system prompt, or to other tuned parameters in the generation config (genConfig).
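One way to chase this down is to make the sampling settings and system prompt explicit rather than relying on defaults. A minimal sketch, assuming ChatGLM3's `chat()` accepts a `history` list with role dicts and the usual sampling parameters (the system prompt text and helper below are hypothetical, not the official ones; the upstream defaults for `temperature`/`top_p` may differ from the 0.7 used above):

```python
# Hypothetical helper that assembles explicit arguments for model.chat(tokenizer, ...)
# so the benchmark run pins down the system prompt and sampling parameters instead
# of inheriting whatever defaults the model's generation config ships with.

SYSTEM_PROMPT = "You are a helpful assistant."  # hypothetical; the official prompt may differ

def build_chat_kwargs(question, temperature=0.7, top_p=0.8, do_sample=True):
    """Collect the keyword arguments that would be passed to model.chat()."""
    # A system-role entry at the start of history plays the role of a system prompt.
    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    return {
        "query": question,
        "history": history,
        "temperature": temperature,
        "top_p": top_p,
        "do_sample": do_sample,
    }

kwargs = build_chat_kwargs("What is the capital of France?")
```

Logging these values alongside each run would make it easy to see whether two setups are actually sampling under the same configuration.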
