Update Tensorflow, gRPC, Protobuf dependencies #868

Merged · 38 commits · Aug 25, 2023

Commits

1b63301  Update Tensorflow to latest, finally update grpcio/protobuf (psfoley, Aug 1, 2023)
f345338  Lint issue fix and missing tf reference (psfoley, Aug 1, 2023)
96d598f  pyzmq version fixed (mansishr, Aug 21, 2023)
c01668d  fix taskrunner tests for windows (Aug 21, 2023)
baff1a0  fix taskrunner test syntax for windows (Aug 21, 2023)
a253d85  adding user option to workspace pip install requirements for windows (Aug 22, 2023)
e94697a  fix windows CI test (Aug 22, 2023)
fba30ff  testing virtual env for windows github actions (Aug 22, 2023)
680d464  testing virtual env for windows github actions (Aug 22, 2023)
e887ba1  testing virtual env for windows github actions (Aug 22, 2023)
c46f78c  testing venv for windows (Aug 23, 2023)
e99ad55  test venv for windows (mansishr, Aug 23, 2023)
f62cecd  test venv for windows (mansishr, Aug 23, 2023)
ac3087b  Added new KerasSerializer. Fixed other Interactive API experiments (psfoley, Aug 24, 2023)
c705aaa  Merge branch 'tf-1.13' of https://github.com/psfoley/openfl into tf-1.13 (psfoley, Aug 24, 2023)
1e45246  Update taskrunner.yml (psfoley, Aug 24, 2023)
027e9c2  Update taskrunner.yml (psfoley, Aug 24, 2023)
2df3b51  Update workspace.py (psfoley, Aug 24, 2023)
45fe7f7  Update workspace.py (psfoley, Aug 24, 2023)
1c52252  Update taskrunner.yml (psfoley, Aug 24, 2023)
dbae5a7  Remove get_model import from global namespace so dependencies are not… (psfoley, Aug 24, 2023)
8786895  Refactoring and cleaning up imports to support Windows install (psfoley, Aug 24, 2023)
ffc4310  Fixed logger import paths (psfoley, Aug 24, 2023)
9a7f5e8  Fix missing imports (psfoley, Aug 24, 2023)
35f625a  Fix native import (psfoley, Aug 24, 2023)
9a6e8e0  Fix lint errors (psfoley, Aug 24, 2023)
1d93aeb  Fix keras optimizer patch. Remove irrelevant unit test (psfoley, Aug 24, 2023)
178f44e  Format logs in UTF-8 for windows (psfoley, Aug 24, 2023)
293ada9  Update interactive-kvasir.yml (psfoley, Aug 24, 2023)
ae1b6e1  Consolidate github actions python versions to single file (psfoley, Aug 24, 2023)
2ac5d55  Update python versions (psfoley, Aug 24, 2023)
84dd777  Update python versions (psfoley, Aug 24, 2023)
edd6b2f  Update python versions (psfoley, Aug 25, 2023)
5d4de2a  Reduce # of DataLoader workers for Pytorch Kvasir CI test (psfoley, Aug 25, 2023)
18fb737  Fix Windows encoding (psfoley, Aug 25, 2023)
acfa1e1  Fix Windows encoding and limit rounds so Github Actions CI doesn't ru… (psfoley, Aug 25, 2023)
bfae178  Fix windows encoding (psfoley, Aug 25, 2023)
ae69055  Fix Windows encoding (psfoley, Aug 25, 2023)

2 changes: 1 addition & 1 deletion .github/workflows/interactive-tensorflow.yml
@@ -27,5 +27,5 @@ jobs:
- name: Interactive API - tensorflow_mnist
run: |
python setup.py build_grpc
pip install tensorflow==2.11
pip install tensorflow==2.13
python -m tests.github.interactive_api_director.experiments.tensorflow_mnist.run
17 changes: 11 additions & 6 deletions .github/workflows/taskrunner.yml
@@ -12,19 +12,24 @@ permissions:

jobs:
build:

strategy:
matrix:
os: ['ubuntu-latest', 'windows-latest']
python-version: ['3.8','3.9','3.10','3.11']
runs-on: ${{ matrix.os }}

steps:
- uses: actions/checkout@v3
- name: Set up Python 3.8
uses: actions/setup-python@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.8"
- name: Install dependencies
python-version: ${{ matrix.python-version }}
- name: Install dependencies ubuntu
if: matrix.os == 'ubuntu-latest'
run: |
python -m pip install --upgrade pip
pip install .
- name: Install dependencies windows
if: matrix.os == 'windows-latest'
run: |
python -m pip install --upgrade pip
pip install .
33 changes: 0 additions & 33 deletions .github/workflows/taskrunner_python_3.10.yml

This file was deleted.

33 changes: 0 additions & 33 deletions .github/workflows/taskrunner_python_3.9.yml

This file was deleted.

4 changes: 2 additions & 2 deletions openfl-tutorials/Federated_Keras_MNIST_Tutorial.ipynb
@@ -16,10 +16,10 @@
"outputs": [],
"source": [
"#Install Tensorflow and MNIST dataset if not installed\n",
"!pip install tensorflow==2.7.0\n",
"!pip install tensorflow==2.13\n",
"\n",
"#Alternatively you could use the intel-tensorflow build\n",
"# !pip install intel-tensorflow==2.3.0"
"# !pip install intel-tensorflow==2.13"
]
},
{
@@ -1,4 +1,4 @@
tensorflow==2.11.1
tensorflow==2.13
tensorflow-datasets==4.6.0
jax
--find-links https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
@@ -16,8 +16,8 @@
"metadata": {},
"outputs": [],
"source": [
"# Install TF if not already. We recommend TF2.7 or greater.\n",
"# !pip install tensorflow==2.8"
"# Install TF if not already. We recommend TF2.13 or greater.\n",
"# !pip install tensorflow==2.13"
]
},
{
@@ -157,7 +157,7 @@
"model.summary()\n",
"\n",
"# Define optimizer\n",
"optimizer = tf.optimizers.Adam(learning_rate=1e-4)\n",
"optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=1e-4)\n",
"\n",
"# Loss and metrics. These will be used later.\n",
"loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n",
@@ -327,7 +327,7 @@
"source": [
"# create an experimnet in federation\n",
"experiment_name = 'cifar10_experiment'\n",
"fl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name)"
"fl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name,serializer_plugin='openfl.plugins.interface_serializer.keras_serializer.KerasSerializer)"
]
},
{
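The FLExperiment constructor in this and the following notebooks now receives a serializer_plugin argument so the new KerasSerializer (commit ac3087b) is used to serialize the Keras model objects before the experiment is shipped to the Director. A minimal sketch of the updated call, assuming the federation object created earlier in each tutorial:

    from openfl.interface.interactive_api.experiment import FLExperiment

    fl_experiment = FLExperiment(
        federation=federation,  # Federation object built earlier in the tutorial
        experiment_name='cifar10_experiment',
        serializer_plugin='openfl.plugins.interface_serializer.keras_serializer.KerasSerializer',
    )
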
@@ -404,7 +404,7 @@
"source": [
"# create an experimnet in federation\n",
"experiment_name = 'mnist_experiment'\n",
"fl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name)"
"fl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name,serializer_plugin='openfl.plugins.interface_serializer.keras_serializer.KerasSerializer)"
]
},
{
@@ -233,7 +233,7 @@
"source": [
"import tensorflow as tf\n",
"from tensorflow.keras.layers import LSTM, Dense\n",
"from tensorflow.keras.optimizers import Adam\n",
"from tensorflow.keras.legacy.optimizers import Adam\n",
"from tensorflow.keras.metrics import TopKCategoricalAccuracy\n",
"from tensorflow.keras.losses import CategoricalCrossentropy\n",
"from tensorflow.keras.models import Sequential\n",
@@ -363,7 +363,7 @@
"source": [
"# create an experimnet in federation\n",
"experiment_name = 'word_prediction_test_experiment'\n",
"fl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name)"
"fl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name,serializer_plugin='openfl.plugins.interface_serializer.keras_serializer.KerasSerializer)"
]
},
{
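TensorFlow 2.13 makes the rewritten Keras optimizers the default, and the previous implementation now lives under tf.keras.optimizers.legacy; the notebooks above switch to the legacy classes, presumably so OpenFL's Keras optimizer-state handling keeps working. A minimal sketch, assuming TensorFlow 2.13 is installed:

    import tensorflow as tf

    # Legacy optimizer namespace, as used in the updated notebooks
    optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=1e-4)

    # Equivalent import form, matching the word-prediction notebook
    # from tensorflow.keras.optimizers.legacy import Adam
    # optimizer = Adam(learning_rate=1e-4)
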
@@ -1,2 +1,2 @@
tensorflow==2.11.1
tensorflow==2.13
numpy==1.22.2
2 changes: 1 addition & 1 deletion openfl-workspace/keras_cnn_mnist/requirements.txt
@@ -1 +1 @@
tensorflow==2.11.1
tensorflow==2.13
@@ -1 +1 @@
tensorflow==2.11.1
tensorflow==2.13
2 changes: 1 addition & 1 deletion openfl-workspace/keras_nlp/requirements.txt
@@ -1 +1 @@
tensorflow==2.11.1
tensorflow==2.13
2 changes: 1 addition & 1 deletion openfl-workspace/keras_nlp_gramine_ready/requirements.txt
@@ -1 +1 @@
tensorflow-cpu==2.11.1
tensorflow-cpu==2.13
2 changes: 1 addition & 1 deletion openfl-workspace/tf_2dunet/requirements.txt
@@ -1,3 +1,3 @@
nibabel
tensorflow==2.11.1
tensorflow==2.13
setuptools>=65.5.1 # not directly required, pinned by Snyk to avoid a vulnerability
2 changes: 1 addition & 1 deletion openfl-workspace/tf_cnn_histology/requirements.txt
@@ -1,3 +1,3 @@
pillow
tensorflow==2.11.1
tensorflow==2.13
tensorflow-datasets
2 changes: 1 addition & 1 deletion openfl/__init__.py
@@ -3,4 +3,4 @@
"""openfl base package."""
from .__version__ import __version__
# flake8: noqa
from .interface.model import get_model
#from .interface.model import get_model
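With the re-export commented out, get_model is no longer available from the openfl package root, so importing openfl no longer pulls in the model-loading dependencies. Code that needs it imports it from its defining module instead, as sketched here:

    # Previously: from openfl import get_model
    from openfl.interface.model import get_model
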
4 changes: 2 additions & 2 deletions openfl/experimental/utilities/runtime_utils.py
@@ -49,7 +49,7 @@ def filter_attributes(ctx, f, **kwargs):
if "include" in kwargs and "exclude" in kwargs:
raise RuntimeError("'include' and 'exclude' should not both be present")
elif "include" in kwargs:
assert type(kwargs["include"]) == list
assert type(kwargs["include"]) is list
for in_attr in kwargs["include"]:
if in_attr not in cls_attrs:
raise RuntimeError(
@@ -59,7 +59,7 @@ def filter_attributes(ctx, f, **kwargs):
if attr not in kwargs["include"]:
delattr(ctx, attr)
elif "exclude" in kwargs:
assert type(kwargs["exclude"]) == list
assert type(kwargs["exclude"]) is list
for in_attr in kwargs["exclude"]:
if in_attr not in cls_attrs:
raise RuntimeError(
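The assertions switch from == to is because flake8 flags equality comparison of types (E721); an identity check against the exact built-in type is what this code intends. A small illustration of the difference:

    include = ["train", "validate"]

    # Identity check: passes only for exactly the built-in list type,
    # which is what the asserts above verify.
    assert type(include) is list

    # isinstance() would also accept list subclasses (a looser check than the assert above).
    assert isinstance(include, list)
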
4 changes: 3 additions & 1 deletion openfl/federated/plan/plan.py
@@ -274,7 +274,9 @@ def get_assigner(self):
aggregation_functions_by_task = self.restore_object('aggregation_function_obj.pkl')
assigner_function = self.restore_object('task_assigner_obj.pkl')
except Exception as exc:
self.logger.error(f'Failed to load aggregation and assigner functions: {exc}')
self.logger.error(
f'Failed to load aggregation and assigner functions: {exc}'
)
self.logger.info('Using Task Runner API workflow')
if assigner_function:
self.assigner_ = Assigner(
2 changes: 1 addition & 1 deletion openfl/federated/task/runner_gandlf.py
@@ -11,7 +11,7 @@
from typing import Union
import yaml

from openfl.utilities import split_tensor_dict_for_holdouts
from openfl.utilities.split import split_tensor_dict_for_holdouts
from openfl.utilities import TensorKey

from .runner import TaskRunner
2 changes: 1 addition & 1 deletion openfl/federated/task/runner_keras.py
@@ -13,7 +13,7 @@

from openfl.utilities import change_tags
from openfl.utilities import Metric
from openfl.utilities import split_tensor_dict_for_holdouts
from openfl.utilities.split import split_tensor_dict_for_holdouts
from openfl.utilities import TensorKey
from .runner import TaskRunner

2 changes: 1 addition & 1 deletion openfl/federated/task/runner_pt.py
@@ -14,7 +14,7 @@

from openfl.utilities import change_tags
from openfl.utilities import Metric
from openfl.utilities import split_tensor_dict_for_holdouts
from openfl.utilities.split import split_tensor_dict_for_holdouts
from openfl.utilities import TensorKey
from .runner import TaskRunner

2 changes: 1 addition & 1 deletion openfl/federated/task/runner_tf.py
@@ -7,7 +7,7 @@
import tensorflow.compat.v1 as tf
from tqdm import tqdm

from openfl.utilities import split_tensor_dict_for_holdouts
from openfl.utilities.split import split_tensor_dict_for_holdouts
from openfl.utilities import TensorKey
from .runner import TaskRunner

2 changes: 1 addition & 1 deletion openfl/federated/task/task_runner.py
@@ -7,7 +7,7 @@
import numpy as np

from openfl.utilities import change_tags
from openfl.utilities import split_tensor_dict_for_holdouts
from openfl.utilities.split import split_tensor_dict_for_holdouts
from openfl.utilities import TensorKey


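Across the task runners, split_tensor_dict_for_holdouts now comes from the openfl.utilities.split module rather than being re-exported by openfl.utilities, apparently as part of the import cleanup for the Windows install (commit 8786895). The updated import, with a purely illustrative usage comment (the argument list is not taken from this diff):

    from openfl.utilities.split import split_tensor_dict_for_holdouts

    # Hypothetical call shape inside a task runner:
    # tensors_to_send, holdout_tensors = split_tensor_dict_for_holdouts(logger, tensor_dict)
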
3 changes: 1 addition & 2 deletions openfl/interface/cli.py
@@ -14,9 +14,8 @@
from click import pass_context
from click import style
import time

from openfl.utilities import add_log_level
import sys
from openfl.utilities import add_log_level


def setup_logging(level='info', log_file=None):
6 changes: 4 additions & 2 deletions openfl/interface/director.py
@@ -14,9 +14,7 @@
from click import Path as ClickPath
from dynaconf import Validator

from openfl.component.director import Director
from openfl.interface.cli_helper import WORKSPACE
from openfl.transport import DirectorGRPCServer
from openfl.utilities import merge_configs
from openfl.utilities.path_check import is_directory_traversal
from openfl.interface.cli import review_plan_callback
@@ -47,6 +45,10 @@ def director(context):
help='Path to a signed certificate')
def start(director_config_path, tls, root_certificate, private_key, certificate):
"""Start the director service."""

from openfl.component.director import Director
from openfl.transport import DirectorGRPCServer

director_config_path = Path(director_config_path).absolute()
logger.info('🧿 Starting the Director Service.')
if is_directory_traversal(director_config_path):
4 changes: 3 additions & 1 deletion openfl/interface/envoy.py
@@ -15,7 +15,6 @@
from click import Path as ClickPath
from dynaconf import Validator

from openfl.component.envoy.envoy import Envoy
from openfl.interface.cli import review_plan_callback
from openfl.interface.cli_helper import WORKSPACE
from openfl.utilities import click_types
@@ -52,6 +51,9 @@ def envoy(context):
def start_(shard_name, director_host, director_port, tls, envoy_config_path,
root_certificate, private_key, certificate):
"""Start the Envoy."""

from openfl.component.envoy.envoy import Envoy

logger.info('🧿 Starting the Envoy.')
if is_directory_traversal(envoy_config_path):
click.echo('The shard config path is out of the openfl workspace scope.')
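Both director.py and envoy.py move their heavy imports (the Director component with its gRPC server, and the Envoy component) from module level into the body of the start commands, so loading the CLI itself no longer requires those dependencies; this matches the import-cleanup commits for the Windows install. The general shape of the pattern, sketched with a simplified command:

    import click

    @click.command()
    def start():
        """Start the service (sketch of the deferred-import pattern used above)."""
        # The heavy dependency is imported only when the command runs,
        # not when the CLI module is imported (e.g. for `fx --help`).
        from openfl.component.director import Director

        ...  # build and run the Director from the parsed config, as the real command does
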
9 changes: 6 additions & 3 deletions openfl/interface/interactive_api/experiment.py
@@ -22,7 +22,7 @@
from openfl.interface.cli import setup_logging
from openfl.interface.cli_helper import WORKSPACE
from openfl.native import update_plan
from openfl.utilities import split_tensor_dict_for_holdouts
from openfl.utilities.split import split_tensor_dict_for_holdouts
from openfl.utilities.workspace import dump_requirements_file


@@ -81,7 +81,9 @@ def _initialize_plan(self):
def _assert_experiment_submitted(self):
"""Assure experiment is sent to director and accepted."""
if not self.experiment_submitted:
self.logger.error('The experiment was not submitted to a Director service.')
self.logger.error(
'The experiment was not submitted to a Director service.'
)
self.logger.error(
'Report the experiment first: '
'use the Experiment.start() method.')
@@ -145,7 +147,8 @@ def stream_metrics(self, tensorboard_logs: bool = True) -> None:
f'Round {metric_message_dict["round"]}, '
f'collaborator {metric_message_dict["metric_origin"]} '
f'{metric_message_dict["task_name"]} result '
f'{metric_message_dict["metric_name"]}:\t{metric_message_dict["metric_value"]:f}')
f'{metric_message_dict["metric_name"]}:\t{metric_message_dict["metric_value"]:f}'
)

if tensorboard_logs:
self.write_tensorboard_metric(metric_message_dict)