[libc] Fix for unused variable warning #98086
Merged
Conversation
This fixes the `unused variable 'new_inner_size'` warning that arises when `new_inner_size` is only used by `LIBC_ASSERT`, by performing the calculation directly in the macro.
@llvm/pr-subscribers-libc
Author: Caslyn Tonelli (Caslyn)
Full diff: https://github.com/llvm/llvm-project/pull/98086.diff
1 file affected:
```diff
diff --git a/libc/src/__support/block.h b/libc/src/__support/block.h
index 026ea9063f416..e1b7aeaaf813c 100644
--- a/libc/src/__support/block.h
+++ b/libc/src/__support/block.h
@@ -442,8 +442,7 @@ Block<OffsetType, kAlign>::allocate(Block *block, size_t alignment,
   if (!info.block->is_usable_space_aligned(alignment)) {
     size_t adjustment = info.block->padding_for_alignment(alignment);
-    size_t new_inner_size = adjustment - BLOCK_OVERHEAD;
-    LIBC_ASSERT(new_inner_size % ALIGNMENT == 0 &&
+    LIBC_ASSERT((adjustment - BLOCK_OVERHEAD) % ALIGNMENT == 0 &&
                 "The adjustment calculation should always return a new size "
                 "that's a multiple of ALIGNMENT");
```
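For context, the pattern is easy to reproduce outside of libc. The sketch below is a minimal, hypothetical stand-in (`MY_ASSERT`, `OVERHEAD`, `ALIGN`, and the function names are invented for illustration and are not part of the libc sources): when an assert-style macro compiles to nothing, a temporary that exists only to feed it becomes unused, and folding the expression into the macro argument removes the warning.

```cpp
#include <cstddef>
#include <cstdlib>

// Stand-in for an assert-style macro that disappears when assertions are
// disabled (illustrative only; not the libc LIBC_ASSERT definition).
#ifdef NDEBUG
#define MY_ASSERT(cond) ((void)0)
#else
#define MY_ASSERT(cond) ((cond) ? (void)0 : std::abort())
#endif

constexpr size_t OVERHEAD = 8; // hypothetical constants for the sketch
constexpr size_t ALIGN = 16;

void before(size_t adjustment) {
  // With assertions disabled, MY_ASSERT discards its argument, so
  // `new_size` is never read and the unused-variable warning fires here.
  size_t new_size = adjustment - OVERHEAD;
  MY_ASSERT(new_size % ALIGN == 0 && "must be a multiple of ALIGN");
}

void after(size_t adjustment) {
  // Folding the computation into the assertion leaves no named temporary
  // to warn about; the whole expression vanishes with the assertion.
  MY_ASSERT((adjustment - OVERHEAD) % ALIGN == 0 &&
            "must be a multiple of ALIGN");
}
```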
PiJoules approved these changes on Jul 8, 2024
aaryanshukla pushed a commit to aaryanshukla/llvm-project that referenced this pull request on Jul 14, 2024
mysterymath added a commit to mysterymath/llvm-project that referenced this pull request on Jul 30, 2024
mysterymath added a commit that referenced this pull request on Aug 5, 2024

This applies a standard trick from Knuth for storing boundary tags with only one word of overhead for allocated blocks. The prev_ block is now only valid if the previous block is free. This is safe, since only coalescing with a free node requires walking the blocks backwards.

To allow determining whether it's safe to traverse backwards, the used flag is changed to a prev_free flag. Since it's still possible to unconditionally traverse forward, the prev_free flag for the next block can be used wherever the old used flag is, so long as there is always a next block.

To ensure there is always a next block, a sentinel last block is added at the end of the range of blocks. Due to the above, this costs only a single word per heap. This sentinel essentially just stores whether the last real block of the heap is free. The sentinel is always considered used and to have a zero inner size.

This completes the block optimizations needed to address #98086. The block structure should now be size-competitive with dlmalloc, although there are still a couple of broader fragmentation concerns to address.
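To make the layout described above easier to picture, here is a rough, hypothetical sketch of a prev_free boundary-tag header. The struct and names below are invented for illustration and are not the actual Block layout in libc/src/__support/block.h.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative only: one possible shape of a block header that uses a
// prev_free flag instead of a used flag. In a real allocator the prev_size
// word would overlap the previous block's usable space, so allocated blocks
// pay only a single word of overhead; it is kept inline here for simplicity.
struct BlockHeader {
  size_t prev_size; // only meaningful when prev_free is true
  size_t size;      // size of this block, including the header
  bool prev_free;   // is the block immediately before this one free?

  // Forward traversal is always possible from the size field.
  BlockHeader *next() {
    return reinterpret_cast<BlockHeader *>(
        reinterpret_cast<uintptr_t>(this) + size);
  }

  // Backward traversal is only needed when coalescing a newly freed block
  // with a free predecessor, and prev_free tells us when that is safe.
  BlockHeader *prev() {
    return prev_free ? reinterpret_cast<BlockHeader *>(
                           reinterpret_cast<uintptr_t>(this) - prev_size)
                     : nullptr;
  }

  // Whether this block is free is read from its successor's prev_free flag;
  // the sentinel last block guarantees a successor always exists.
  bool is_free() { return next()->prev_free; }
};
```

The sentinel block mentioned in the commit message is what makes a check like `is_free()` safe for the last real block: it is always considered used, has a zero inner size, and its prev_free flag records whether the heap's final real block is free.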
banach-space pushed a commit to banach-space/llvm-project that referenced this pull request on Aug 7, 2024
kstoimenov pushed a commit to kstoimenov/llvm-project that referenced this pull request on Aug 15, 2024