
facing problem in deploy demo locally #9

Open
kumarabhishek-github opened this issue May 29, 2023 · 1 comment
kumarabhishek-github commented May 29, 2023

Hi Developers, Thank You for this wonderful job !!!

I'm trying to run DetGPT locally by following the README file, but the steps are not clear to me. Please fix this.

  1. Installation
    git clone https://github.com/OptimalScale/DetGPT.git
    cd DetGPT
    conda create -n detgpt python=3.9 -y
    conda activate detgpt
    pip install -e .

Step 1 : Completed

  1. Install GroundingDino
    python -m pip install -e GroundingDINO
  2. Download the pretrained checkpoint
    cd output_models
    bash download.sh all
    cd -

content inside dir after running -> bash download.sh all
output_models/
coco_task_annotation.json
download.sh
pretrained_minigpt4_7b.pth
pretrained_minigpt4_13b.pth
task_tuned.pth

Step 2 : Completed

I'm having a problem here:
-> Merge the robin lora model with the original llama model and save the merged model to output_models/robin-7b, where the corresponding model path is specified in this config file: https://github.com/OptimalScale/DetGPT/blob/main/detgpt/configs/models/detgpt_robin_7b.yaml#L16

To obtain the original llama model, one may refer to this doc. To merge a lora model with a base model, one may refer to PEFT or use the merge script provided by LMFlow.
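Since the README mentions PEFT as one option for the merge, here is a minimal sketch of what merging a LoRA adapter into a base model looks like with the PEFT API. This is not the repo's merge_lora.py or LMFlow's script; the flag names simply mirror the placeholders above, and all model paths are placeholders you must fill in yourself.

```python
# Hedged sketch: folding a LoRA adapter (e.g. robin) into its base model
# (e.g. llama) using PEFT's merge_and_unload(). Paths are placeholders.
import argparse
import sys


def parse_args(argv=None):
    p = argparse.ArgumentParser(description="Merge a LoRA adapter into a base model")
    p.add_argument("--model_name_or_path", required=True)  # base llama weights
    p.add_argument("--lora_model_path", required=True)     # LoRA adapter dir
    p.add_argument("--output_model_path", required=True)   # where to save the merge
    return p.parse_args(argv)


def merge_lora(base, lora, out):
    # Heavy imports deferred so argument parsing works without them installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    model = AutoModelForCausalLM.from_pretrained(base)
    model = PeftModel.from_pretrained(model, lora)
    merged = model.merge_and_unload()  # bakes the LoRA deltas into the base weights
    merged.save_pretrained(out)
    # Save the tokenizer alongside so the output dir is directly loadable.
    AutoTokenizer.from_pretrained(base).save_pretrained(out)


if __name__ == "__main__" and len(sys.argv) > 1:
    args = parse_args()
    merge_lora(args.model_name_or_path, args.lora_model_path, args.output_model_path)
```

The merged directory produced by `save_pretrained` is what a config path like output_models/robin-7b would then point at.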

Questions:

  1. How do I merge the robin lora model with the original llama model?
  2. The directory output_models/robin-7b doesn't exist. Do I need to create it?
  3. When merging a lora model with a base model, is the llama model the base model? What does "base model" mean here?
  4. How do I replace 'path/to/pretrained_linear_weights' in the config file with the real path?

I'm trying to use this script, but I'm not sure how to get {huggingface-model-name-or-path-to-base-model}, {path-to-lora-model}, and {path-to-merged-model}:

    python examples/merge_lora.py \
        --model_name_or_path {huggingface-model-name-or-path-to-base-model} \
        --lora_model_path {path-to-lora-model} \
        --output_model_path {path-to-merged-model}

What path should I specify here -> ckpt: 'path/to/pretrained_linear_weights'?

Please help me run this locally by listing the required steps.

shizhediao (Contributor) commented May 30, 2023

Hi,
Thanks for your interest!

  1. If you want to merge the robin lora model with the original llama model, please refer to this script: https://github.com/OptimalScale/LMFlow/blob/main/utils/apply_delta.py. Run it as:

         python apply_delta.py \
             --base <path_to_llama_weights> \
             --target <path_to_save> \
             --delta <path_to_delta_weights>
  2. After merging the weights, say you save the model to <path_to_save>. Then output_models/robin-7b should be replaced by <path_to_save>.
  3. Yes, llama is the base model; "base model" here means llama.
  4. path/to/pretrained_linear_weights should be replaced with http://lmflow.org:5000/detgpt/task_tuned.pth
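Putting answers 2 and 4 together, the edited config might look like the sketch below. Only `ckpt` appears in the thread; every other key name is an assumption, so check the actual detgpt_robin_7b.yaml for the exact fields.

```yaml
# Hypothetical sketch of detgpt_robin_7b.yaml after the edits above.
# Key names other than `ckpt` are assumptions, not copied from the repo.
model:
  # Merged robin/llama model from step 1-2 (<path_to_save>),
  # e.g. output_models/robin-7b:
  llama_model: "<path_to_save>"
  # Pretrained linear weights (step 4); task_tuned.pth was also downloaded
  # locally by download.sh, so a local path may work as well:
  ckpt: "http://lmflow.org:5000/detgpt/task_tuned.pth"
```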

We noticed there are some leaked weights on the Hugging Face Hub. If you want to save time, you may use them.
