
Fix suspect crash cause in 5.0.0 #72

Merged: 1 commit, May 10, 2019
Conversation

@fddecc (Contributor) commented on May 8, 2019

What did you change and why?

A change introduced between 4.1.5 and 5.0.0 is resulting in crashes. Multiple internal crash reports (as well as #58) point to the same location in the code.

Internal reports indicated EXC_BAD_ACCESS, which would be expected inside the sync block if self had already been deallocated.

Comparing 4.1.5 to 5.0.0 (4.1.5...5.0.0#diff-77256cb830a28637414932596218ae34R125), the relocated release of the unmanaged self reference is the suspect.

This PR partially reverts c301ea8.
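To illustrate the suspected hazard, here is a minimal sketch (not the library's actual code; `Connection` and the queue label are stand-ins) of why relocating an `Unmanaged` release is dangerous: `passRetained` hands out a +1 reference that must be balanced by exactly one `release()`, and if that release is moved so it runs before the last access, the access becomes a use-after-free, surfacing as EXC_BAD_ACCESS.

```swift
import Foundation

// Stand-in for NTPConnection: an object whose methods synchronize on a
// private dispatch queue, like the `canRetry` accessor in the real code.
final class Connection {
    private let lockQueue = DispatchQueue(label: "com.example.lock")
    private(set) var closed = false

    func close() {
        // If `self` has already been freed when this closure runs,
        // this is the crash described in the reports.
        lockQueue.sync { closed = true }
    }
}

// +1 retain: the object stays alive with no visible strong reference.
let unmanaged = Unmanaged.passRetained(Connection())

// Taking a strong reference is safe only while the retain from
// passRetained is still outstanding.
let conn = unmanaged.takeUnretainedValue()
conn.close()

// Balancing release. Relocating this line above the access would free
// the object first (absent another strong reference), and the
// `lockQueue.sync` closure would then run against deallocated memory.
unmanaged.release()
```

Here `conn` holds its own strong reference, so the final `release()` is safe; the bug arises when the only thing keeping the object alive is the unmanaged retain and the release happens too early.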

Potential risks introduced?

Even more crashes?

What tests were performed (include steps)?

Additional code was introduced to manually release the NTPConnection object (similar to what was done prior to this change), which reproduced the crash. The crash goes away after the changes in this PR, but more comprehensive testing might be needed.

Checklist

  • Unit/UI tests have been written (if necessary)
  • Manually tested

@fddecc (Contributor, Author) commented on May 9, 2019

For a bit more context: my thinking is that close(), invoked from stopQueue() in NTPClient, was the cause. It would release the NTPConnection object, and then the canRetry lockQueue.sync block would fail because it referenced a deallocated self. This is difficult to confirm: the commit that introduced the change had no PR or details beyond its title, so it is hard to gauge whether we are re-introducing a major issue. However, this is fairly close to what's in 4.2.0, which has been stable for us, so I am reasonably confident this won't be a regression.
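The lifetime discipline this comment describes can be sketched as follows. This is an illustrative reconstruction, not the library's code: `NTPConnection`, `lockQueue`, and `canRetry` mirror names from the discussion, but the bodies are hypothetical. The idea of the partial revert is that the unmanaged retain taken when work starts is released only from the work's own completion path, so any `lockQueue.sync` closure runs while `self` is still guaranteed alive.

```swift
import Foundation

// Hypothetical stand-in for the real NTPConnection.
final class NTPConnection {
    private let lockQueue = DispatchQueue(label: "com.example.ntp.lock")
    private var attempts = 0
    private let maxAttempts = 3

    // The accessor implicated in the crash reports: a sync block that
    // reads state guarded by lockQueue. Safe only while self is alive.
    var canRetry: Bool {
        lockQueue.sync { attempts < maxAttempts }
    }

    func start(onComplete: @escaping () -> Void) {
        // Retain self for the duration of the async work, mirroring the
        // pre-5.0.0 flow this PR partially restores.
        let retained = Unmanaged.passRetained(self)
        DispatchQueue.global().async {
            let connection = retained.takeUnretainedValue()
            _ = connection.canRetry   // safe: the retain is still outstanding
            onComplete()
            retained.release()        // balanced release after all access
        }
    }
}
```

Note that the caller can drop its own reference immediately (as stopQueue()/close() effectively did); the unmanaged retain keeps the object valid until the queued work finishes:

```swift
let sem = DispatchSemaphore(value: 0)
var completed = false
NTPConnection().start {   // temporary; no strong reference kept by caller
    completed = true
    sem.signal()
}
sem.wait()
```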
