feat(loss-functions): add poisson_nll_loss function #22501
Conversation
Thanks for contributing to Ivy! 😊👏
Below is one of the failing tests for my implementation:

______________ test_poisson_nll_loss[cpu-tensorflow-False-False] _______________
ivy_tests/test_ivy/test_functional/test_experimental/test_nn/test_losses.py:158: in test_poisson_nll_loss
fn_tree="functional.ivy.experimental.poisson_nll_loss",
ivy_tests/test_ivy/test_functional/test_experimental/test_nn/test_losses.py:187: in test_poisson_nll_loss
helpers.test_function(
ivy_tests/test_ivy/helpers/function_testing.py:459: in test_function
gradient_test(
ivy_tests/test_ivy/helpers/function_testing.py:1015: in gradient_test
value_test(
ivy_tests/test_ivy/helpers/assertions.py:169: in value_test
assert_all_close(
ivy_tests/test_ivy/helpers/assertions.py:62: in assert_all_close
assert np.allclose(
E AssertionError: the results from backend tensorflow and ground truth framework torch do not match
E [0.]!=[-0.69314719]
E
E
E Falsifying example: test_poisson_nll_loss(
E backend_fw='tensorflow',
E on_device='cpu',
E dtype_input_target=(['float64', 'float64'], [array([2.]), array([2.])]),
E log_input=False,
E full=False,
E epsilon=1e-08,
E reduction='none',
E fn_name='poisson_nll_loss',
E test_flags=FunctionTestFlags(
E ground_truth_backend='torch',
E num_positional_args=0,
E with_out=False,
E instance_method=False,
E test_gradients=True,
E test_compile=False,
E as_variable=[False],
E native_arrays=[False],
E container=[False],
E ),
E      )

But when I run it in a temporary Python file to check it:

import torch
import ivy
ivy.set_backend("tensorflow")
x = ivy.array([2.], dtype=ivy.float64)
y = ivy.array([2.], dtype=ivy.float64)
z = ivy.poisson_nll_loss(x, y, log_input=False, reduction="none")
print(z)
print(z.dtype)
print("TORCH OUTPUT")
x = torch.tensor([2.], dtype=torch.float64)
y = torch.tensor([2.], dtype=torch.float64)
z = torch.nn.functional.poisson_nll_loss(x, y, log_input=False, reduction="none")
print(z)
print(z.dtype)
"""
ivy.array([0.61370563])
float64
TORCH OUTPUT
tensor([0.6137], dtype=torch.float64)
torch.float64
""" |
The same happens with this test failure:

E       AssertionError: the results from backend tensorflow and ground truth framework torch do not match
E [0. 0. 0. 0. 0. 0. 0. 0. 0.]!=[-0.07701635 -0.07701635 -0.07701635 -0.07701635 -0.07701635 -0.07701635
E -0.07701635 -0.07701635 -0.07701635]
E
E
E Falsifying example: test_poisson_nll_loss(
E backend_fw='tensorflow',
E on_device='cpu',
E dtype_input_target=(['float32', 'float32'],
E [array([2., 2., 2., 2., 2., 2., 2., 2., 2.], dtype=float32),
E array([2., 2., 2., 2., 2., 2., 2., 2., 2.], dtype=float32)]),
E log_input=False,
E full=False,
E epsilon=1e-08,
E reduction='none',
E fn_name='poisson_nll_loss',
E test_flags=FunctionTestFlags(
E ground_truth_backend='torch',
E num_positional_args=0,
E with_out=False,
E instance_method=False,
E test_gradients=True,
E test_compile=False,
E as_variable=[False],
E native_arrays=[False],
E container=[False],
E ),
E )
E
E You can reproduce this example by temporarily adding @reproduce_failure('6.82.6', b'AXicY2Bk4GBAAKahz2QEYgAXHQAv') as a decorator on your test case
ivy_tests/test_ivy/helpers/assertions.py:62: AssertionError

But when I call my implementation directly, it gives a different result:

import torch
import ivy
ivy.set_backend("tensorflow")
x = ivy.array([2., 2., 2., 2., 2., 2., 2., 2., 2.], dtype=ivy.float32)
y = ivy.array([2., 2., 2., 2., 2., 2., 2., 2., 2.], dtype=ivy.float32)
z = ivy.poisson_nll_loss(x, y, log_input=False, reduction="none")
print(z)
print(z.dtype)
x = torch.tensor([2., 2., 2., 2., 2., 2., 2., 2., 2.], dtype=torch.float32)
y = torch.tensor([2., 2., 2., 2., 2., 2., 2., 2., 2.], dtype=torch.float32)
z = torch.nn.functional.poisson_nll_loss(x, y, log_input=False, reduction="none")
print(z)
print(z.dtype)
"""
ivy.array([0.61370564, 0.61370564, 0.61370564, 0.61370564, 0.61370564,
0.61370564, 0.61370564, 0.61370564, 0.61370564])
float32
tensor([0.6137, 0.6137, 0.6137, 0.6137, 0.6137, 0.6137, 0.6137, 0.6137, 0.6137])
torch.float32
"'" |
…le issue Refined the `poisson_nll_loss` composition function to address discrepancies with the native PaddlePaddle method. This refinement ensures accuracy and is to be replaced once PaddlePaddle promotes the changes from the develop branch to a stable release. Related to PR in PaddlePaddle: PaddlePaddle/Paddle#56992
…ss the correct dtypes for input/labels.
@kurshakuz Can you please provide any feedback on this? Example:

E       AssertionError: the results from backend paddle and ground truth framework torch do not match
E [0.]!=[-0.6931472]
E
E
E Falsifying example: test_poisson_nll_loss(
E backend_fw='paddle',
E on_device='cpu',
E dtype_input_target=(['float32', 'float32'],
E [array([2.], dtype=float32), array([2.], dtype=float32)]),
E log_input=False,
E full=False,
E epsilon=1e-08,
E reduction='none',
E test_flags=FunctionTestFlags(
E ground_truth_backend='torch',
E num_positional_args=0,
E with_out=False,
E instance_method=False,
E test_gradients=True,
E test_compile=False,
E as_variable=[False],
E native_arrays=[False],
E container=[False],
E precision_mode=False,
E ),
E fn_name='poisson_nll_loss',
E      )

But when I call my function with the same inputs for which it is showing failures, I get the same results across all frameworks:

import torch
import ivy

ivy.set_backend("tensorflow")
ivy.set_inplace_mode('strict')
x = ivy.array([2.], dtype=ivy.float32)
y = ivy.array([2.], dtype=ivy.float32)
z = ivy.poisson_nll_loss(x, y, log_input=False, reduction="none")
print(f"TF backend:{z}")#TF backend:ivy.array([0.61370564])
ivy.set_backend("torch")
ivy.set_inplace_mode('strict')
x = ivy.array([2.], dtype=ivy.float32)
y = ivy.array([2.], dtype=ivy.float32)
z = ivy.poisson_nll_loss(x, y, log_input=False, reduction="none")
print(f"torch_backend{z}")#torch_backendivy.array([0.61370564])
ivy.set_backend("paddle")
ivy.set_inplace_mode('strict')
x = ivy.array([2.], dtype=ivy.float32)
y = ivy.array([2.], dtype=ivy.float32)
z = ivy.poisson_nll_loss(x, y, log_input=False, reduction="none")
print(f"paddle_backend{z}") #paddle_backendivy.array([0.61370564])
print("TORCH NATIVE API")
x = torch.tensor([2.], dtype=torch.float32)
y = torch.tensor([2.], dtype=torch.float32)
z = torch.nn.functional.poisson_nll_loss(x, y, log_input=False, reduction="none")
print(f"TORCH NATIVE: {z}") #TORCH NATIVE: tensor([0.6137]) |
…generation for testing due to numeric instability.
Update: So, setting
Hey! I am very sorry for such a delay; this PR somehow slipped under the radar. Would you mind removing all changes that are unrelated to the function you are adding?
Sure!
Hey @kurshakuz. I removed the out-of-scope functionality as discussed with Ved and Haider on Discord (refs: LINK).
LGTM! Thanks for your contribution!
Thanks for the review @kurshakuz!
Closes #21727