

ASoC: max98363: Fix "Unbalanced pm_runtime_enable!" warning #4441

Closed
wants to merge 1 commit

Conversation

yongzhi1


Do not call pm_runtime_enable() if it's already enabled.

[  159.241749] PM: resume of devices complete after 879.236 msecs
[  159.244654] max98363 sdw:2:019f:8363:00:1: Unbalanced pm_runtime_enable!
[  159.245100] max98363 sdw:2:019f:8363:00:0: Unbalanced pm_runtime_enable!
[  170.636884] PM: Finishing wakeup.
[  170.636889] OOM killer enabled.
[  170.640395] Restarting tasks ...

Signed-off-by: Yong Zhi <[email protected]>
@@ -185,7 +185,8 @@ static int max98363_io_init(struct sdw_slave *slave)
 	/* make sure the device does not suspend immediately */
 	pm_runtime_mark_last_busy(dev);

-	pm_runtime_enable(dev);
+	if (!pm_runtime_enabled(dev))
+		pm_runtime_enable(dev);
Collaborator


@yongzhi1 this looks a bit fishy with the runtime PM enable sequence and first_hw_init usage here.

Author


Yes, I agree. If pm_runtime_enable() is called by the codec driver, shouldn't there be a matching pm_runtime_disable() call in the same driver?

Member


@yongzhi1 it's not an issue with pm_runtime_disable() but rather the fact that this entire block is gated by the test on first_hw_init. I don't see how it's possible to execute this block twice.

Author

@yongzhi1 yongzhi1 Jun 28, 2023


Indeed, the line above is called only once, and I can reproduce the warning only once after reboot, with the S0ix test "suspend_stress_test -c1":

[   79.258335] max98363 sdw:2:019f:8363:00:1: Bus clash detected before INT mask is enabled
[   79.258589] max98363 sdw:2:019f:8363:00:1: Unbalanced pm_runtime_enable!
[   79.258840] max98363 sdw:2:019f:8363:00:0: Bus clash detected before INT mask is enabled
[   79.259113] max98363 sdw:2:019f:8363:00:0: Unbalanced pm_runtime_enable!
[   90.705383] OOM killer enabled.
[   90.708888] Restarting tasks ..

Will re-test with #4345 and close this one.

@yongzhi1 yongzhi1 marked this pull request as draft June 28, 2023 00:59
@yongzhi1
Author

With #4345, I am not able to reproduce either of these two issues:

  1. Unbalanced pm_runtime_enable! warning
  2. WARN_ON for invalid op at:
    [ 1600.973403] ? regcache_cache_only+0x52/0x9c
    [ 1600.973407] ? report_bug+0xd5/0x16a
    [ 1600.973415] ? handle_bug+0x41/0x67
    [ 1600.973422] ? exc_invalid_op+0x1b/0x4c
    [ 1600.973428] ? asm_exc_invalid_op+0x16/0x20
    [ 1600.973437] ? acpi_dev_resume+0x57/0x57
    [ 1600.973442] ? regmap_unlock_spinlock+0x13/0x13
    [ 1600.973450] ? regcache_cache_only+0x52/0x9c
    [ 1600.973456] max98363_suspend+0x1b/0x2d [snd_soc_max98363

@yongzhi1 yongzhi1 closed this Jun 28, 2023
@yongzhi1 yongzhi1 deleted the unbalanced branch June 29, 2023 00:10

3 participants