
Problem when loading model, image_processor, tokenizer using create_model_and_transforms() #306

MrNquyen opened this issue Sep 19, 2024 · 0 comments


```python
model, image_processor, tokenizer = create_model_and_transforms(
    clip_vision_encoder_path="ViT-L-14",
    clip_vision_encoder_pretrained="openai",
    lang_encoder_path=model_path,
    tokenizer_path=model_path,
    cross_attn_every_n_layers=4,
    cache_dir="~/.cache",  # Defaults to ~/.cache
    # trust_remote_code=True,
)
```

When I run the code sample from README.md, I get this error:

```
Traceback (most recent call last):
  File "/home/npl/ViInfographicCaps/code/lmm_models/flamingoGithub/code/fla.py", line 120, in
    model, image_processor, tokenizer = load_model_transform()
                                        ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/npl/ViInfographicCaps/code/lmm_models/flamingoGithub/code/fla.py", line 20, in load_model_transform
    model, image_processor, tokenizer = create_model_and_transforms(
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/npl/ViInfographicCaps/vir_env/openfla/lib/python3.12/site-packages/open_flamingo/src/factory.py", line 54, in create_model_and_transforms
    lang_encoder = AutoModelForCausalLM.from_pretrained(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/npl/ViInfographicCaps/vir_env/openfla/lib/python3.12/site-packages/transformers/models/auto/auto_factory.py", line 524, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/npl/ViInfographicCaps/vir_env/openfla/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 979, in from_pretrained
    trust_remote_code = resolve_trust_remote_code(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/npl/ViInfographicCaps/vir_env/openfla/lib/python3.12/site-packages/transformers/dynamic_module_utils.py", line 626, in resolve_trust_remote_code
    raise ValueError(
ValueError: The repository for anas-awadalla/mpt-1b-redpajama-200b contains custom code which must be executed to correctly load the model. You can inspect the repository content at https://hf.co/anas-awadalla/mpt-1b-redpajama-200b.
Please pass the argument trust_remote_code=True to allow custom code to be run.
```
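For context, the check behind this error can be sketched roughly as follows. This is a simplified assumption, not the actual `transformers` source: when a repository ships custom modeling code and `trust_remote_code` was never set, loading is refused.

```python
# Simplified sketch (an assumption, not the real transformers implementation)
# of the gatekeeping logic in resolve_trust_remote_code: repos with custom
# code require an explicit opt-in before that code may be executed.
def resolve_trust_remote_code(trust_remote_code, model_name, has_remote_code):
    if has_remote_code and trust_remote_code is None:
        raise ValueError(
            f"The repository for {model_name} contains custom code which must "
            "be executed to correctly load the model. Please pass the argument "
            "trust_remote_code=True to allow custom code to be run."
        )
    return bool(trust_remote_code)
```

Under this reading, the error is not a bug in open_flamingo itself; the flag simply never reaches the inner `from_pretrained` call.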


When I pass `trust_remote_code=True` to the `from_pretrained` call in factory.py:

```python
lang_encoder = AutoModelForCausalLM.from_pretrained(
    lang_encoder_path, local_files_only=use_local_files, trust_remote_code=True
)
```

I get this error:

```
  File "/home/npl/ViInfographicCaps/code/lmm_models/flamingoGithub/code/fla.py", line 120, in
    model, image_processor, tokenizer = load_model_transform()
                                        ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/npl/ViInfographicCaps/code/lmm_models/flamingoGithub/code/fla.py", line 20, in load_model_transform
    model, image_processor, tokenizer = create_model_and_transforms(
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/npl/ViInfographicCaps/vir_env/openfla/lib/python3.12/site-packages/open_flamingo/src/factory.py", line 60, in create_model_and_transforms
    decoder_layers_attr_name = _infer_decoder_layers_attr_name(lang_encoder)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/npl/ViInfographicCaps/vir_env/openfla/lib/python3.12/site-packages/open_flamingo/src/factory.py", line 97, in _infer_decoder_layers_attr_name
    raise ValueError(
ValueError: We require the attribute name for the nn.ModuleList in the decoder storing the transformer block layers. Please supply this string manually.
```
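From the traceback, `_infer_decoder_layers_attr_name` presumably maps known language-model class names to the attribute path of the decoder's block `nn.ModuleList`, and raises when the class is unrecognized. A minimal self-contained sketch of that lookup (the class-name-to-attribute table here is an assumption for illustration, not the actual open_flamingo source):

```python
# Hypothetical sketch of the inference step that fails in the traceback:
# match the language model's class name against a table of known decoder
# layouts and return the dotted path to its transformer-block ModuleList.
MODEL_TO_LAYERS_ATTR = {
    "mptforcausallm": "transformer.blocks",   # MPT-style decoders (assumption)
    "llamaforcausallm": "model.layers",       # LLaMA-style decoders (assumption)
    "gptneoxforcausallm": "gpt_neox.layers",  # GPT-NeoX-style decoders (assumption)
}

def infer_decoder_layers_attr_name(model) -> str:
    cls_name = model.__class__.__name__.lower()
    for known_name, attr_path in MODEL_TO_LAYERS_ATTR.items():
        if known_name in cls_name:
            return attr_path
    raise ValueError(
        "We require the attribute name for the nn.ModuleList in the decoder "
        "storing the transformer block layers. Please supply this string manually."
    )
```

If this reading is right, the error means the custom model class loaded via `trust_remote_code` is not in the table, so the attribute name would have to be supplied manually for that decoder.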

Is there any way to solve this problem?
