Bump hf libraries versions #403

Merged: 7 commits, Jan 11, 2024
8 changes: 4 additions & 4 deletions .github/workflows/test_inf1.yml
@@ -42,16 +42,16 @@ jobs:
       - name: Run CLI tests
         run: |
           source aws_neuron_venv_pytorch/bin/activate
-          HF_TOKEN_OPTIMUM_NEURON_CI=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/cli
+          HF_TOKEN=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/cli
       - name: Run export tests
         run: |
           source aws_neuron_venv_pytorch/bin/activate
-          HF_TOKEN_OPTIMUM_NEURON_CI=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/exporters
+          HF_TOKEN=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/exporters
       - name: Run inference tests
         run: |
           source aws_neuron_venv_pytorch/bin/activate
-          HF_TOKEN_OPTIMUM_NEURON_CI=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/inference
+          HF_TOKEN=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/inference
       - name: Run pipelines tests
         run: |
           source aws_neuron_venv_pytorch/bin/activate
-          HF_TOKEN_OPTIMUM_NEURON_CI=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/pipelines
+          HF_TOKEN=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/pipelines
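The workflow steps switch from the bespoke `HF_TOKEN_OPTIMUM_NEURON_CI` variable to `HF_TOKEN`, which recent `huggingface_hub` releases read automatically, so the test code no longer has to write the token to disk itself. A minimal sketch of that resolution order (a simplification of the documented behavior: the environment variable takes precedence over any token saved on disk):

```python
import os

def resolve_hf_token(stored_token=None):
    # Sketch of huggingface_hub's token lookup: the HF_TOKEN environment
    # variable wins; otherwise fall back to a token saved on disk (if any).
    return os.environ.get("HF_TOKEN") or stored_token

# With HF_TOKEN exported, as the workflow steps above now do:
os.environ["HF_TOKEN"] = "hf_ci_secret"
print(resolve_hf_token(stored_token="hf_saved"))  # hf_ci_secret

# Without it, the saved token (or None) is used:
del os.environ["HF_TOKEN"]
print(resolve_hf_token(stored_token="hf_saved"))  # hf_saved
```

Because the variable is read at lookup time, each `pytest` invocation above only needs the inline `HF_TOKEN=...` prefix; nothing persists between steps.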
10 changes: 5 additions & 5 deletions .github/workflows/test_inf2.yml
@@ -38,20 +38,20 @@ jobs:
      - name: Run CLI tests
        run: |
          source aws_neuron_venv_pytorch/bin/activate
-          HF_TOKEN_OPTIMUM_NEURON_CI=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/cli
+          HF_TOKEN=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/cli
      - name: Run exporters tests
        run: |
          source aws_neuron_venv_pytorch/bin/activate
-          HF_TOKEN_OPTIMUM_NEURON_CI=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/exporters
+          HF_TOKEN=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/exporters
      - name: Run inference tests
        run: |
          source aws_neuron_venv_pytorch/bin/activate
-          HF_TOKEN_OPTIMUM_NEURON_CI=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/inference
+          HF_TOKEN=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/inference
      - name: Run generation tests
        run: |
          source aws_neuron_venv_pytorch/bin/activate
-          HF_TOKEN_OPTIMUM_NEURON_CI=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/generation
+          HF_TOKEN=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/generation
      - name: Run pipelines tests
        run: |
          source aws_neuron_venv_pytorch/bin/activate
-          HF_TOKEN_OPTIMUM_NEURON_CI=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/pipelines
+          HF_TOKEN=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m is_inferentia_test tests/pipelines
2 changes: 1 addition & 1 deletion .github/workflows/test_trainium_common.yml
@@ -36,6 +36,6 @@ jobs:
        run: pip install .[tests,neuronx]
      - name: Run tests on Neuron cores
        run: |
-          HF_TOKEN_OPTIMUM_NEURON_CI=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} USE_VENV="false" pytest -m "is_trainium_test" $TESTS_TO_IGNORE_FLAGS tests
+          HF_TOKEN=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} USE_VENV="false" pytest -m "is_trainium_test" $TESTS_TO_IGNORE_FLAGS tests
      - name: Run staging tests on Neuron cores
        run: HUGGINGFACE_CO_STAGING=1 pytest -m "is_trainium_test and is_staging_test" $TESTS_TO_IGNORE_FLAGS tests -s
2 changes: 1 addition & 1 deletion .github/workflows/test_trainium_distributed.yml
@@ -35,5 +35,5 @@ jobs:
        run: pip install .[tests,neuronx]
      - name: Run tests on Neuron cores
        run: |
-          HF_TOKEN_OPTIMUM_NEURON_CI=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m "is_trainium_test" tests/distributed/
+          HF_TOKEN=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} pytest -m "is_trainium_test" tests/distributed/
2 changes: 1 addition & 1 deletion .github/workflows/test_trainium_examples.yml
@@ -75,7 +75,7 @@ jobs:
        run: pip install .[tests,neuronx]
      - name: Run example tests on Neuron cores
        run: |
-          HF_TOKEN_OPTIMUM_NEURON_CI=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} USE_VENV=false COVERAGE=${{ github.event.inputs.priority }} RUN_TINY=$RUN_TINY RUN_SLOW=1 pytest -m "is_trainium_test" tests/test_examples.py -v
+          HF_TOKEN=${{ secrets.HF_TOKEN_OPTIMUM_NEURON_CI }} USE_VENV=false COVERAGE=${{ github.event.inputs.priority }} RUN_TINY=$RUN_TINY RUN_SLOW=1 pytest -m "is_trainium_test" tests/test_examples.py -v
  stop-runner:
    name: Stop self-hosted EC2 runner
    needs:
(changes to a pipeline file whose name was not captured)
@@ -185,6 +185,8 @@ def __call__(
        # 1. Check inputs
        self.check_inputs(
            prompt,
+            image,
+            mask_image,
            height,
            width,
            strength,
6 changes: 3 additions & 3 deletions setup.py
@@ -13,10 +13,10 @@


INSTALL_REQUIRES = [
-    "transformers == 4.35.0",
+    "transformers == 4.36.2",
    "accelerate == 0.23.0",
    "optimum >= 1.14.0",
-    "huggingface_hub >= 0.14.0",
+    "huggingface_hub >= 0.20.1",
    "numpy>=1.22.2, <=1.25.2",
    "protobuf<4",
]
@@ -29,7 +29,7 @@
    "sentencepiece",
    "datasets",
    "sacremoses",
-    "diffusers >= 0.23.0",
+    "diffusers >= 0.25.0",
    "safetensors",
]
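`huggingface_hub.get_token()`, which the test changes in this PR rely on, only ships in recent releases, which is presumably why the floor moves from 0.14.0 to 0.20.1. A quick sketch of the tuple-based comparison behind such minimum-version pins (a simplification; real specifier handling lives in the `packaging` library):

```python
def parse_version(v: str) -> tuple:
    # "0.20.1" -> (0, 20, 1); numeric tuples avoid the "0.9" > "0.20"
    # trap that plain string comparison falls into.
    return tuple(int(part) for part in v.split("."))

def satisfies_min(installed: str, minimum: str) -> bool:
    return parse_version(installed) >= parse_version(minimum)

print(satisfies_min("0.20.1", "0.14.0"))  # True: the new floor sits above the old one
print(satisfies_min("0.14.0", "0.20.1"))  # False: older installs no longer qualify
print("0.9.0" > "0.20.1")                 # True as strings -- why tuples are needed
```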
12 changes: 2 additions & 10 deletions tests/inference/inference_utils.py
@@ -13,15 +13,14 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import os
 import shutil
 import tempfile
 import unittest
 from io import BytesIO
 from typing import Dict

+import huggingface_hub
 import requests
-from huggingface_hub import HfFolder
 from PIL import Image
 from transformers import set_seed

@@ -61,12 +60,7 @@ class NeuronModelIntegrationTestMixin(unittest.TestCase):

     @classmethod
     def setUpClass(cls):
-        if os.environ.get("HF_TOKEN_OPTIMUM_NEURON_CI", None) is not None:
-            token = os.environ.get("HF_TOKEN_OPTIMUM_NEURON_CI")
-            HfFolder.save_token(token)
-        else:
-            raise RuntimeError("Please specify the token via the HF_TOKEN_OPTIMUM_NEURON_CI environment variable.")
-        cls._token = HfFolder.get_token()
+        cls._token = huggingface_hub.get_token()

         model_name = cls.MODEL_ID.split("/")[-1]
         model_dir = tempfile.mkdtemp(prefix=f"{model_name}_")
@@ -82,8 +76,6 @@ def setUpClass(cls):

     @classmethod
     def tearDownClass(cls):
-        if cls._token is not None:
-            HfFolder.save_token(cls._token)
         if cls.local_model_path is not None:
             shutil.rmtree(cls.local_model_path)
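With the token now read via `huggingface_hub.get_token()` instead of being written with `HfFolder.save_token`, the fixture no longer has to restore anything on teardown; only the temporary model directory needs cleaning up. A hedged sketch of that simplified fixture shape (class and model names here are illustrative, not from the repository, and `get_token` is a stdlib stand-in):

```python
import os
import shutil
import tempfile
import unittest

def get_token():
    # Stand-in for huggingface_hub.get_token(): returns whatever token is
    # ambiently configured (an env var in CI), writing nothing to disk.
    return os.environ.get("HF_TOKEN")

class IntegrationTestMixin(unittest.TestCase):
    MODEL_ID = "some-org/some-model"  # illustrative placeholder

    @classmethod
    def setUpClass(cls):
        cls._token = get_token()  # read-only: nothing to undo later
        model_name = cls.MODEL_ID.split("/")[-1]
        cls.local_model_path = tempfile.mkdtemp(prefix=f"{model_name}_")

    @classmethod
    def tearDownClass(cls):
        # Only filesystem state created by the test itself is cleaned up.
        if cls.local_model_path is not None:
            shutil.rmtree(cls.local_model_path)
```

The read-only lookup is what lets the diff above delete both the `save_token` call in `setUpClass` and the token-restore branch in `tearDownClass`.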
6 changes: 2 additions & 4 deletions tests/test_examples.py
@@ -24,7 +24,7 @@
 from typing import Any, Callable, Dict, List, Optional, Set, Tuple, TypeVar, Union
 from unittest import TestCase

-from huggingface_hub import HfFolder
+import huggingface_hub
 from transformers import (
     CONFIG_MAPPING,
     MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING,
@@ -56,9 +56,7 @@
 TypeOrDictOfType = Union[T, Dict[str, T]]


-TOKEN = HfFolder.get_token()
-if os.environ.get("HF_TOKEN_OPTIMUM_NEURON_CI", None) is not None:
-    TOKEN = os.environ.get("HF_TOKEN_OPTIMUM_NEURON_CI")
+TOKEN = huggingface_hub.get_token()

 DEFAULT_CACHE_REPO = "optimum-internal-testing/optimum-neuron-cache-for-testing"
 SAVED_CUSTOM_CACHE_REPO = load_custom_cache_repo_name_from_hf_home()
13 changes: 3 additions & 10 deletions tests/test_runner.py
@@ -17,7 +17,7 @@
 import os
 from unittest import TestCase

-from huggingface_hub import HfFolder
+import huggingface_hub
 from parameterized import parameterized

 from optimum.neuron.utils.cache_utils import (
@@ -58,21 +58,14 @@ class TestExampleRunner(TestCase):

     @classmethod
     def setUpClass(cls):
-        cls._token = HfFolder.get_token()
+        cls._token = huggingface_hub.get_token()
         cls._cache_repo = load_custom_cache_repo_name_from_hf_home()
         cls._env = dict(os.environ)
-        if os.environ.get("HF_TOKEN_OPTIMUM_NEURON_CI", None) is not None:
-            token = os.environ.get("HF_TOKEN_OPTIMUM_NEURON_CI")
-            HfFolder.save_token(token)
-            set_custom_cache_repo_name_in_hf_home(cls.CACHE_REPO_NAME)
-        else:
-            raise RuntimeError("Please specify the token via the HF_TOKEN_OPTIMUM_NEURON_CI environment variable.")
+        set_custom_cache_repo_name_in_hf_home(cls.CACHE_REPO_NAME)

     @classmethod
     def tearDownClass(cls):
         os.environ = cls._env
-        if cls._token is not None:
-            HfFolder.save_token(cls._token)
         if cls._cache_repo is not None:
             try:
                 set_custom_cache_repo_name_in_hf_home(cls._cache_repo)
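One subtlety worth noting in the surviving teardown code: `os.environ = cls._env` rebinds the module attribute to a plain dict, which does not push the restored values back into the actual process environment. A safer restore idiom (a general sketch, not a change made in this PR) mutates the existing mapping in place:

```python
import os

class EnvSnapshot:
    """Save and restore os.environ around a block, mutating it in place."""

    def __enter__(self):
        self._saved = dict(os.environ)  # snapshot the current environment
        return self

    def __exit__(self, *exc):
        # clear()/update() mutate the real environ mapping, so the restore
        # reaches the process environment; plain rebinding would not.
        os.environ.clear()
        os.environ.update(self._saved)

with EnvSnapshot():
    os.environ["SCRATCH_VAR"] = "1"  # visible only inside the block
print("SCRATCH_VAR" in os.environ)  # False
```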