Merge branch 'master' into kanvi/MLIR-ConvSqueezeBias
kanvi-nervana committed Mar 22, 2023
2 parents 85d23c9 + f7faa99 commit f5da6f3
Showing 4,538 changed files with 160,375 additions and 112,418 deletions.
Note: this diff is too large to display in full; only the first 3000 changed files are loaded.
15 changes: 3 additions & 12 deletions .bazelrc
@@ -398,11 +398,6 @@ build:windows --verbose_failures
# See: https://github.com/bazelbuild/bazel/issues/5163
build:windows --features=compiler_param_file

# On windows, we never cross compile
build:windows --distinct_host_configuration=false
# On linux, don't cross compile by default
build:linux --distinct_host_configuration=false

# Do not risk cache corruption. See:
# https://github.com/bazelbuild/bazel/issues/3360
build:linux --experimental_guard_against_concurrent_changes
@@ -443,7 +438,6 @@ build:rbe --bes_backend=buildeventservice.googleapis.com
build:rbe --bes_results_url="https://source.cloud.google.com/results/invocations"
build:rbe --bes_timeout=600s
build:rbe --define=EXECUTOR=remote
build:rbe --distinct_host_configuration=false
build:rbe --flaky_test_attempts=3
build:rbe --jobs=800
build:rbe --remote_executor=grpcs://remotebuildexecution.googleapis.com
@@ -557,8 +551,8 @@ build:rbe_linux_py3_base --python_path="/usr/local/bin/python3.9"
build:rbe_linux_py3_base --repo_env=TF_PYTHON_CONFIG_REPO="@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_python3.9"

build:rbe_win --config=rbe
build:rbe_win --crosstool_top="//tensorflow/tools/toolchains/win/tf_win_01232023:toolchain"
build:rbe_win --extra_toolchains="//tensorflow/tools/toolchains/win/tf_win_01232023:cc-toolchain-x64_windows"
build:rbe_win --crosstool_top="//tensorflow/tools/toolchains/win/tf_win_02232023:toolchain"
build:rbe_win --extra_toolchains="//tensorflow/tools/toolchains/win/tf_win_02232023:cc-toolchain-x64_windows"
build:rbe_win --extra_execution_platforms="//tensorflow/tools/toolchains/win:rbe_windows_ltsc2019"
build:rbe_win --host_platform="//tensorflow/tools/toolchains/win:rbe_windows_ltsc2019"
build:rbe_win --platforms="//tensorflow/tools/toolchains/win:rbe_windows_ltsc2019"
@@ -610,10 +604,8 @@ build:elinux --crosstool_top=@local_config_embedded_arm//:toolchain
build:elinux --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
build:elinux_aarch64 --config=elinux
build:elinux_aarch64 --cpu=aarch64
build:elinux_aarch64 --distinct_host_configuration=true
build:elinux_armhf --config=elinux
build:elinux_armhf --cpu=armhf
build:elinux_armhf --distinct_host_configuration=true
build:elinux_armhf --copt -mfp16-format=ieee
# END TF REMOTE BUILD EXECUTION OPTIONS

@@ -627,7 +619,6 @@ try-import %workspace%/.bazelrc.user

# Here are bazelrc configs for release builds
build:release_base --config=v2
build:release_base --distinct_host_configuration=false
test:release_base --flaky_test_attempts=3
test:release_base --test_size_filters=small,medium

@@ -705,4 +696,4 @@ build:tf_fuzztest --action_env=CC=clang
build:tf_fuzztest --action_env=CXX=clang++
build:tf_fuzztest --spawn_strategy=sandboxed
build:tf_fuzztest --config=monolithic
build:tf_fuzztest --@libjpeg_turbo//:noasm=yes
build:tf_fuzztest --@libjpeg_turbo//:noasm=yes
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/tensorflow_issue_template.yaml
@@ -1,5 +1,5 @@
name: Tensorflow Issue Template
description: Use this template to report any issue
description: Please report security-related issues using the [Google Bug Hunters reporting form](https://bughunters.google.com/). To report any other TensorFlow-related issue, please use this template.
body:
- type: dropdown
id: issue-type
4 changes: 2 additions & 2 deletions .github/bot_config.yml
@@ -17,8 +17,8 @@
assignees:
- synandi
- tiruk007
- gaikwadrahul8
- pjpratik
- tilakrayal
- sushreebarsa
# A list of assignees for compiler folder
compiler_assignees:
- joker-eph
39 changes: 0 additions & 39 deletions .github/stale.yml

This file was deleted.

64 changes: 64 additions & 0 deletions .github/workflows/stale-issues.yml
@@ -0,0 +1,64 @@
# Copyright 2023 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================

name: Close inactive issues
on:
schedule:
- cron: "30 1 * * *"

jobs:
close-issues:
# Don't do this in forks
if: github.repository == 'tensorflow/tensorflow'
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- name: Awaiting response issues
uses: actions/stale@v5
with:
days-before-issue-stale: 7
days-before-issue-close: 7
stale-issue-label: "stale"
# Reason for closing the issue; the default value is not_planned
close-issue-reason: completed
only-labels: "stat:awaiting response"
stale-issue-message: >
This issue is stale because it has been open for 7 days with no activity.
It will be closed if no further activity occurs. Thank you.
close-issue-message: >
This issue was closed because it has been inactive for 7 days since being marked as stale.
Please reopen if you'd like to work on this further.
days-before-pr-stale: 14
days-before-pr-close: 14
stale-pr-message: "This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you."
close-pr-message: "This PR was closed because it has been inactive for 14 days since being marked as stale. Please reopen if you'd like to work on this further."
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Contribution issues
uses: actions/stale@v5
with:
days-before-issue-stale: 180
days-before-issue-close: 365
stale-issue-label: "stale"
# Reason for closing the issue; the default value is not_planned
close-issue-reason: completed
any-of-labels: "stat:contribution welcome,stat:good first issue"
stale-issue-message: >
This issue is stale because it has been open for 180 days with no activity.
It will be closed if no further activity occurs. Thank you.
close-issue-message: >
This issue was closed because it has been inactive for 1 year.
repo-token: ${{ secrets.GITHUB_TOKEN }}
3 changes: 3 additions & 0 deletions .github/workflows/trusted-partners.yml
@@ -52,6 +52,9 @@ jobs:
case "linaro.org":
console.log(await script.filter({github, context, domain}));
break;
case "arm.com":
console.log(await script.filter({github, context, domain}));
break;
case "google.com":
console.log("Googler. No action necessary");
break;
4 changes: 4 additions & 0 deletions .github/workflows/trusted_partners.js
@@ -85,6 +85,10 @@ const filter_action = async ({github, context, domain}) => {
assignees.push('nitins17', 'penpornk');
}
}
if (lowercased_title.includes('tf-mot') && lowercased_title.includes('arm') &&
domain.includes('arm.com')) {
assignees.push('rino20', 'yyoon', 'lenscloth');
}

const resp_label = await github.rest.issues.addLabels({
issue_number: context.issue.number,
1 change: 1 addition & 0 deletions README.md
@@ -9,6 +9,7 @@
[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/tensorflow/tensorflow/badge)](https://api.securityscorecards.dev/projects/github.com/tensorflow/tensorflow)
[![Fuzzing Status](https://oss-fuzz-build-logs.storage.googleapis.com/badges/tensorflow.svg)](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:tensorflow)
[![Fuzzing Status](https://oss-fuzz-build-logs.storage.googleapis.com/badges/tensorflow-py.svg)](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:tensorflow-py)
[![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/44)](https://ossrank.com/p/44)
[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v1.4%20adopted-ff69b4.svg)](CODE_OF_CONDUCT.md)

**`Documentation`** |
163 changes: 160 additions & 3 deletions RELEASE.md
@@ -1,3 +1,152 @@
# Release 2.13.0

## Breaking Changes

* <DOCUMENT BREAKING CHANGES HERE>
* <THIS SECTION SHOULD CONTAIN API, ABI AND BEHAVIORAL BREAKING CHANGES>

* `tf.keras`

* Removed the Keras scikit-learn API wrappers (`KerasClassifier` and
`KerasRegressor`), which had been deprecated in August 2021.
We recommend using [SciKeras](https://github.com/adriangb/scikeras)
instead.
* The default Keras model saving format is now the Keras v3 format:
calling `model.save("xyz.keras")` will no longer create an H5 file;
it will create a native Keras model file instead.
This is only a breaking change if you were manually inspecting or
modifying H5 files saved by Keras under a `.keras` extension.
If this breaks you, simply add `save_format="h5"` to your `.save()` call
to revert to the prior behavior (see the sketch below).
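
A minimal sketch of the two saving paths (illustrative only; the model definition here is arbitrary):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# New default: a ".keras" path now produces a native Keras v3 file, not HDF5.
model.save("model.keras")

# Prior behavior: save to HDF5, either via the extension or explicitly.
model.save("model.h5")
model.save("legacy_model", save_format="h5")
```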

* The LMDB kernels have been changed to return an error. This is in preparation
for completely removing them from TensorFlow. The LMDB dependency that these
kernels brought into TensorFlow has been dropped, making the build
slightly faster and more secure.

## Known Caveats

* <CAVEATS REGARDING THE RELEASE (BUT NOT BREAKING CHANGES).>
* <ADDING/BUMPING DEPENDENCIES SHOULD GO HERE>
* <KNOWN LACK OF SUPPORT ON SOME PLATFORM, SHOULD GO HERE>

## Major Features and Improvements

* `tf.lite`:

* Add 16-bit and 64-bit float type support for built-in op `cast`.
* The Python TF Lite Interpreter bindings now have an option,
`experimental_disable_delegate_clustering`, to turn off delegate
clustering (see the sketch after this list).
* Add int16x8 support for the built-in op `exp`.
* Add int16x8 support for the built-in op `mirror_pad`.
* Add 16-bit int type support for the built-in ops `less`, `greater_than`,
and `equal`.
* Add 8-bit and 16-bit support for `floor_div` and `floor_mod`.
* Add int16 indices support for the built-in ops `gather` and `gather_nd`.
* Add reference implementation for 16-bit int unquantized `add`.
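
A hedged sketch of the new interpreter option (this assumes the flag is exposed as a keyword argument on the Python `tf.lite.Interpreter` constructor; `model.tflite` is a placeholder path):

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="model.tflite",
    # Assumed constructor argument; turns off delegate clustering.
    experimental_disable_delegate_clustering=True,
)
interpreter.allocate_tensors()
```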

* `tf.keras`

* Added F-Score metrics `tf.keras.metrics.FBetaScore`,
`tf.keras.metrics.F1Score`, and `tf.keras.metrics.R2Score`.
* Added activation function `tf.keras.activations.mish`.
* Added experimental `keras.metrics.experimental.PyMetric` API for metrics
that run Python code on the host CPU (compiled outside of the TensorFlow
graph). This can be used for integrating metrics from external Python
libraries (like sklearn or pycocotools) into Keras as first-class Keras
metrics.
* Added `tf.keras.optimizers.Lion` optimizer.
* The `SidecarEvaluatorModelExport` callback has been added to Keras as
`keras.callbacks.SidecarEvaluatorModelExport`. This callback allows for
exporting the best-scoring model as evaluated by a `SidecarEvaluator`.
The evaluator regularly evaluates the model and exports it if the
user-defined comparison function determines that it is an improvement.
* Added warmup capabilities to the `tf.keras.optimizers.schedules.CosineDecay`
learning rate scheduler. You can now specify an initial and a target
learning rate; the scheduler will linearly interpolate between the two
before beginning its decay phase (see the sketch after this list).
* Added experimental support for an exactly-once visitation guarantee for
evaluating Keras models trained with
`tf.distribute.ParameterServerStrategy`, via the
`exact_evaluation_shards` argument in `Model.fit` and `Model.evaluate`.
* Added `tf.keras.__internal__.KerasTensor`,
`tf.keras.__internal__.SparseKerasTensor`, and
`tf.keras.__internal__.RaggedKerasTensor` classes. You can use these
classes to do instance type checking and type annotations for
layer/model inputs and outputs.
* All the `tf.keras.dtensor.experimental.optimizers` classes have been
merged with `tf.keras.optimizers`. You can migrate your code to use
`tf.keras.optimizers` directly. The API namespace for
`tf.keras.dtensor.experimental.optimizers` will be removed in future
releases.
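
A hedged sketch of the warmup-enabled cosine schedule (the argument names `warmup_target` and `warmup_steps` are assumptions based on the description above):

```python
import tensorflow as tf

# Warm up linearly from 0.0 to 1e-3 over 1,000 steps, then decay over 10,000 steps.
schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=0.0,
    decay_steps=10_000,
    warmup_target=1e-3,   # assumed name for the target learning rate
    warmup_steps=1_000,   # assumed name for the warmup length
)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```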

* `tf.function`:

* ConcreteFunction (`tf.types.experimental.ConcreteFunction`) as generated
through `get_concrete_function` now performs holistic input validation
similar to calling `tf.function` directly. This can cause breakages
where existing calls pass Tensors with the wrong shape or omit certain
non-Tensor arguments (including default values).
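
A small sketch of the stricter behavior (hedged; the exact error raised may differ):

```python
import tensorflow as tf

@tf.function
def double(x):
    return x * 2

concrete = double.get_concrete_function(tf.TensorSpec([2], tf.float32))

concrete(tf.constant([1.0, 2.0]))         # matches the traced signature: fine
# concrete(tf.constant([1.0, 2.0, 3.0]))  # wrong shape: now rejected by input validation
```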

* `tf.nn`

* `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse`
now support ids and weights described by `tf.RaggedTensor`s.
* Added a new boolean argument `allow_fast_lookup` to
`tf.nn.embedding_lookup_sparse` and
`tf.nn.safe_embedding_lookup_sparse`, which enables a simplified and
typically faster lookup procedure.
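
A hedged sketch combining both additions, ragged ids/weights plus `allow_fast_lookup` (shapes and values are illustrative):

```python
import tensorflow as tf

params = tf.random.normal([10, 4])                        # embedding table
ids = tf.ragged.constant([[0, 3], [5]], dtype=tf.int64)   # ragged ids per row
weights = tf.ragged.constant([[1.0, 0.5], [1.0]])         # matching ragged weights

embeddings = tf.nn.embedding_lookup_sparse(
    params, ids, weights,
    combiner="mean",
    allow_fast_lookup=True,  # new flag; opts into the simplified, typically faster path
)
```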

* `tf.data`

* `tf.data.Dataset.zip` now supports Python-style zipping, i.e.
`Dataset.zip(a, b, c)`.
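
A short sketch of the new call form next to the older tuple form:

```python
import tensorflow as tf

a = tf.data.Dataset.range(3)
b = tf.data.Dataset.range(3, 6)
c = tf.data.Dataset.range(6, 9)

zipped = tf.data.Dataset.zip(a, b, c)  # new Python-style form
# Equivalent to the older tuple form: tf.data.Dataset.zip((a, b, c)).
for x, y, z in zipped.as_numpy_iterator():
    print(x, y, z)
```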

* `tf.SavedModel`

* Introduced the class method
`tf.saved_model.experimental.Fingerprint.from_proto(proto)`, which can
be used to construct a `Fingerprint` object directly from a protobuf.
* Introduced the member method
`tf.saved_model.experimental.Fingerprint.singleprint()`, which provides
a convenient way to uniquely identify a SavedModel.
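
A heavily hedged sketch of these two methods (the proto module path and the on-disk `fingerprint.pb` file name are assumptions, not part of the note above):

```python
import tensorflow as tf
from tensorflow.core.protobuf import fingerprint_pb2  # assumed location of FingerprintDef

proto = fingerprint_pb2.FingerprintDef()
with open("/path/to/saved_model/fingerprint.pb", "rb") as f:  # hypothetical path
    proto.ParseFromString(f.read())

fp = tf.saved_model.experimental.Fingerprint.from_proto(proto)
print(fp.singleprint())  # single string that uniquely identifies this SavedModel
```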

## Bug Fixes and Other Changes

* <SIMILAR TO ABOVE SECTION, BUT FOR OTHER IMPORTANT CHANGES / BUG FIXES>
* <IF A CHANGE CLOSES A GITHUB ISSUE, IT SHOULD BE DOCUMENTED HERE>
* <NOTES SHOULD BE GROUPED PER AREA>

* `tf.distribute`

* Opened an experimental API,
`tf.distribute.experimental.coordinator.get_current_worker_index`, for
retrieving the worker index from within a worker, when using parameter
server training with a custom training loop.
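
A hedged sketch of where this call sits in a custom-loop parameter-server setup (`NUM_WORKERS` and the surrounding coordinator wiring are assumed, not shown):

```python
import tensorflow as tf

NUM_WORKERS = 4  # hypothetical cluster size

def per_worker_dataset_fn():
    # Runs on a worker; use the new API to shard the data by worker index.
    worker_index = tf.distribute.experimental.coordinator.get_current_worker_index()
    return tf.data.Dataset.range(1_000).shard(num_shards=NUM_WORKERS, index=worker_index)

# Typically passed to coordinator.create_per_worker_dataset(per_worker_dataset_fn).
```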

* `tf.experimental.dtensor`:

* Deprecated `dtensor.run_on` in favor of `dtensor.default_mesh`, to
correctly indicate that the context does not override the mesh that the
ops and functions will run on; it only sets a fallback default mesh.
* The list of members of `dtensor.Layout` and `dtensor.Mesh` has changed
slightly as part of efforts to consolidate the C++ and Python source
code with pybind11. Most notably, `Layout.serialized_string` is removed.
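
A hedged sketch of the rename (assuming `default_mesh` is used as a context manager, as `run_on` was, and that `mesh` is an existing `dtensor.Mesh`):

```python
from tensorflow.experimental import dtensor

# Before (deprecated):
#   with dtensor.run_on(mesh):
#       ...

# After: only sets a fallback mesh; explicit placements are not overridden.
with dtensor.default_mesh(mesh):  # `mesh` is assumed to be defined elsewhere
    ...
```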

* `tf.experimental.ExtensionType`:

* `tf.experimental.ExtensionType` now supports Python `tuple` as
the type annotation of its fields.
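
A small sketch of the new annotation support (hedged; the field name and values are illustrative):

```python
import tensorflow as tf

class PairOfTensors(tf.experimental.ExtensionType):
    values: tuple[tf.Tensor, tf.Tensor]  # plain Python `tuple` annotation now accepted

pair = PairOfTensors(values=(tf.constant([1.0]), tf.constant([2.0])))
```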

## Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

<INSERT>, <NAME>, <HERE>, <USING>, <GITHUB>, <HANDLE>


# Release 2.12.0

## Breaking Changes
@@ -159,7 +308,15 @@
`rerandomize_each_iteration=True`, the `sample_from_datasets()`
operation will use a different (deterministic) sequence of numbers every
epoch.

* Added a new field, `warm_start`, to
`tf.data.experimental.OptimizationOptions`. If it is set to `True`,
tf.data will start the background threads of asynchronous
transformations upon iterator creation, rather than upon the first call
to `GetNext`. This can improve the latency of the initial `GetNext`
call, at the expense of extra memory to hold prefetched elements between
iterator construction and first use (see the sketch below).
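
A hedged sketch of enabling this (assuming the option is reached through `tf.data.Options().experimental_optimization`):

```python
import tensorflow as tf

options = tf.data.Options()
options.experimental_optimization.warm_start = True  # assumed attribute path

dataset = (tf.data.Dataset.range(1_000)
           .map(lambda x: x * 2)
           .prefetch(tf.data.AUTOTUNE)
           .with_options(options))

iterator = iter(dataset)  # background threads start here rather than on the first GetNext
```
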
* `tf.test`:

* Added `tf.test.experimental.sync_devices`, which is useful for
@@ -1574,7 +1731,7 @@ This releases introduces several vulnerability fixes:
([CVE-2022-23572](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23572))
* Fixes a heap OOB read/write in `SpecializeType`
([CVE-2022-23574](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23574))
* Fixes an unitialized variable access in `AssignOp`
* Fixes an uninitialized variable access in `AssignOp`
([CVE-2022-23573](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23573))
* Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize`
([CVE-2022-23575](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23575))
@@ -1852,7 +2009,7 @@ This releases introduces several vulnerability fixes:
([CVE-2022-23572](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23572))
* Fixes a heap OOB read/write in `SpecializeType`
([CVE-2022-23574](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23574))
* Fixes an unitialized variable access in `AssignOp`
* Fixes an uninitialized variable access in `AssignOp`
([CVE-2022-23573](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23573))
* Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize`
([CVE-2022-23575](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23575))