
[torchlib] Implement upsample_nearest{nd}.vec #1874

Merged 11 commits from justinchu/upsample-vec into main on Sep 24, 2024
Conversation

justinchuby (Collaborator)

No description provided.
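
Since no description is provided, a quick sketch may help: the {nd} in the title covers the 1d/2d/3d variants, and each .vec overload takes the input plus exactly one of output_size or scale_factors (the other is None). The snippet below is illustrative only; it shows the ATen-level semantics these torchlib functions target, via the public torch.ops.aten entry points, and is not the PR's implementation.

    # Illustrative sketch, not this PR's code: ATen-level semantics of the
    # upsample_nearest{1,2,3}d.vec overloads, shown here for the 2d case.
    import torch

    x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)

    # Sized by an explicit output shape (scale_factors is None)...
    y1 = torch.ops.aten.upsample_nearest2d.vec(x, [8, 8], None)

    # ...or by per-spatial-dimension scale factors (output_size is None).
    y2 = torch.ops.aten.upsample_nearest2d.vec(x, None, [2.0, 2.0])

    assert y1.shape == y2.shape == (1, 1, 8, 8)
    assert torch.equal(y1, y2)  # same nearest-neighbor source pixels

In torchlib this family is typically expressed with the ONNX Resize operator in nearest mode, fed either a sizes or a scales input, which is presumably what lets the 1d/2d/3d variants share one implementation.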

@justinchuby added the topic: torch_lib label (Related to the torch/aten function lib in development) on Sep 23, 2024
codecov bot commented Sep 23, 2024

❌ 1 test failed:

Tests completed    Failed    Passed    Skipped
12200              1         12199     2300

View the full list of the 1 ❄️ flaky test:
onnxscript.tools.transformers_models.phi_test.TestExportPhi.test_phi_dort_static

Flake rate in main: 100.00% (Passed 0 times, Failed 579 times)

Stack Traces | 13.2s run time
onnxscript/_internal/version_utils.py:114: in call_f
    return fct(self)
.../tools/transformers_models/phi_test.py:105: in test_phi_dort_static
    gradients = onnxscript.tools.training_helper.train_loop(compiled_model, *input_tensors)
onnxscript/tools/training_helper.py:42: in train_loop
    loss.backward()
..../test_torch_nightly/lib/python3.11.../site-packages/torch/_tensor.py:581: in backward
    torch.autograd.backward(
..../test_torch_nightly/lib/python3.11.../torch/autograd/__init__.py:347: in backward
    _engine_run_backward(
..../test_torch_nightly/lib/python3.11.../torch/autograd/graph.py:825: in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
..../test_torch_nightly/lib/python3.11.../torch/autograd/function.py:307: in apply
    return user_fn(self, *args)
..../test_torch_nightly/lib/python3.11.../_functorch/_aot_autograd/runtime_wrappers.py:2048: in backward
    out = call_compiled_backward()
..../test_torch_nightly/lib/python3.11.../_functorch/_aot_autograd/runtime_wrappers.py:1980: in call_compiled_backward
    out = call_func_at_runtime_with_args(
..../test_torch_nightly/lib/python3.11.../_functorch/_aot_autograd/utils.py:133: in call_func_at_runtime_with_args
    out = normalize_as_list(f(*args))
..../test_torch_nightly/lib/python3.11.../nn/modules/module.py:1736: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
..../test_torch_nightly/lib/python3.11.../nn/modules/module.py:1747: in _call_impl
    return forward_call(*args, **kwargs)
..../test_torch_nightly/lib/python3.11.../torch/_dynamo/eval_frame.py:632: in _fn
    return fn(*args, **kwargs)
..../test_torch_nightly/lib/python3.11.../torch/fx/graph_module.py:784: in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)
..../test_torch_nightly/lib/python3.11.../torch/fx/graph_module.py:361: in __call__
    raise e
..../test_torch_nightly/lib/python3.11.../torch/fx/graph_module.py:348: in __call__
    return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
..../test_torch_nightly/lib/python3.11.../nn/modules/module.py:1736: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
..../test_torch_nightly/lib/python3.11.../nn/modules/module.py:1747: in _call_impl
    return forward_call(*args, **kwargs)
<eval_with_key>.41:5: in forward
    fused_0 = self.fused_0(tangents_1, primals_36, add_14, getitem_7, getitem_8, addmm_10, tanh_1, add_7, getitem_4, getitem_5, addmm_4, tanh, clone, getitem_1, getitem_2, primals_1, t_12, t_20, view_43, view_39, transpose_11, transpose_12, detach_11, t_16, view_23, transpose_13, transpose_14, unsqueeze_12, unsqueeze_11, t_24, t_32, t_28, primals_20, t_36, t_44, view_21, view_17, transpose_20, transpose_21, detach_15, t_40, view_1, transpose_22, transpose_23, unsqueeze_8, unsqueeze_7, t_48, t_56, t_52, primals_4);  tangents_1 = primals_36 = add_14 = getitem_7 = getitem_8 = addmm_10 = tanh_1 = add_7 = getitem_4 = getitem_5 = addmm_4 = tanh = clone = getitem_1 = getitem_2 = primals_1 = t_12 = t_20 = view_43 = view_39 = transpose_11 = transpose_12 = detach_11 = t_16 = view_23 = transpose_13 = transpose_14 = unsqueeze_12 = unsqueeze_11 = t_24 = t_32 = t_28 = primals_20 = t_36 = t_44 = view_21 = view_17 = transpose_20 = transpose_21 = detach_15 = t_40 = view_1 = transpose_22 = transpose_23 = unsqueeze_8 = unsqueeze_7 = t_48 = t_56 = t_52 = primals_4 = None
..../test_torch_nightly/lib/python3.11.../torch/fx/graph_module.py:784: in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)
..../test_torch_nightly/lib/python3.11.../onnx/_internal/onnxruntime.py:1017: in _ort_acclerated_call
    onnx_session = onnxruntime.InferenceSession(
..../test_torch_nightly/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:419: in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
..../test_torch_nightly/lib/python3.11.../onnxruntime/capi/onnxruntime_inference_collection.py:474: in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
E   onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (aten_rsub_505) Op (aten_rsub|folded_0) [ShapeInferenceError] (op_type:Sub, node name: n3): B has inconsistent type tensor(float)

To view individual test run time comparison to the main branch, go to the Test Analytics Dashboard
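
Note that the failing test is a known flake on main (100.00% flake rate above), not something introduced by this PR. The error itself is a type-inference failure: a Sub node inside the folded aten_rsub function receives inputs of different element types, and the |folded_0 suffix in the op type suggests the mismatch surfaces in a copy of the function produced by a folding pass. A minimal, self-contained sketch of that failure class (made-up tensor names, unrelated to the actual phi model) is:

    # Hypothetical repro of the failure class only: ONNX type/shape inference
    # rejects a Sub whose two inputs have different element types.
    from onnx import TensorProto, helper, shape_inference

    a = helper.make_tensor_value_info("A", TensorProto.FLOAT, [2])
    b = helper.make_tensor_value_info("B", TensorProto.FLOAT16, [2])  # mismatched
    c = helper.make_tensor_value_info("C", TensorProto.FLOAT, [2])

    graph = helper.make_graph(
        [helper.make_node("Sub", ["A", "B"], ["C"], name="n3")],
        "repro",
        [a, b],
        [c],
    )
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 18)])

    try:
        # strict_mode + check_type turn inference problems into exceptions,
        # mirroring what onnxruntime hits when creating an InferenceSession.
        shape_inference.infer_shapes(model, check_type=True, strict_mode=True)
    except Exception as e:
        print(e)  # expect something like: (op_type:Sub, ...): inconsistent type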

@justinchuby changed the title from [torchlib] Implement upsample_nearest3d.vec to [torchlib] Implement upsample_nearest{nd}.vec on Sep 24, 2024
@justinchuby enabled auto-merge (squash) on Sep 24, 2024 at 16:32
@justinchuby merged commit 99ae64e into main on Sep 24, 2024 (32 of 41 checks passed)
@justinchuby deleted the justinchu/upsample-vec branch on Sep 24, 2024 at 16:40
Labels: topic: torch_lib (Related to the torch/aten function lib in development)
2 participants