Dev/rackscale shmem allocators #319

Merged
hunhoffe merged 10 commits into master from dev/rackscale-shmem-allocators on Jul 16, 2023

Conversation

hunhoffe (Collaborator) commented on Jul 6, 2023:

This PR completes the process of using memory with NUMA affinity for rackscale benchmarks. In particular, it:

  • Allocates shmem RPC buffers in the client's shmem affinity (a rough sketch follows this list)
  • Allows clients and the controller to allocate memory from any affinity by creating a unique affinity NodeId for the single shmem region each kernel has. This allowed the client frame-mapping state to be deleted, since that state is now embedded in the node id (see the second sketch below). This feature will be needed for dynamic replication.
  • Associates the ShmemAllocator used for ProcLog replicas with a shmem affinity; this can be used in the future when a replica may be used remotely (in the case of dynamic replication)
  • Specifies a host physical NUMA node to assign to clients' non-shmem memory
  • Changes rackscale benchmarks to run with 1, 2, 3, and 4 clients (on 4-socket machines), because the 4-client configuration alone may not be a good measurement: one client will be co-located on the same NUMA node as the controller
  • Fixes a bug in the rackscale baseline benchmark tests
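
As a rough illustration of the first item (all names below are hypothetical and not this kernel's actual API), allocating a client's RPC buffer out of the shmem allocator that matches that client's affinity might look like this:

```rust
use std::collections::HashMap;

/// Hypothetical bump allocator over one client's shmem region.
struct ShmemAllocator {
    next: usize,
    end: usize,
}

impl ShmemAllocator {
    /// Hand out `size` bytes from the region, or None if it is exhausted.
    fn alloc(&mut self, size: usize) -> Option<usize> {
        if self.next + size <= self.end {
            let addr = self.next;
            self.next += size;
            Some(addr)
        } else {
            None
        }
    }
}

/// Carve the RPC buffer out of the allocator that matches the client's shmem
/// affinity, so RPC traffic lands in memory local to that client.
fn alloc_rpc_buffer(
    allocators: &mut HashMap<usize, ShmemAllocator>,
    client_affinity: usize,
    buf_size: usize,
) -> Option<usize> {
    allocators.get_mut(&client_affinity)?.alloc(buf_size)
}

fn main() {
    let mut allocators = HashMap::new();
    allocators.insert(1, ShmemAllocator { next: 0x1000, end: 0x9000 });
    let buf = alloc_rpc_buffer(&mut allocators, 1, 0x2000);
    println!("RPC buffer for client-affinity 1 at {:?}", buf);
}
```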

Of note, log replicas were already allocated using local shmem, so no changes were made there.
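
The second item is the core of the change: each kernel's shmem region gets its own affinity NodeId, so the owner of a shmem frame can be computed from the node id instead of being tracked in a separate mapping. A minimal sketch of such an encoding, with made-up constants and names rather than the PR's real code:

```rust
/// Identifier for a memory-affinity node as seen by the allocators.
type NodeId = usize;

/// Hypothetical: NodeIds below this value are host-local NUMA affinities;
/// NodeIds at or above it are per-kernel shmem affinities.
const MAX_LOCAL_AFFINITIES: usize = 8;

/// Map a kernel (e.g. controller = 0, clients = 1..N) to the NodeId of its
/// shmem region, giving each kernel a unique shmem affinity.
fn shmem_affinity_for_kernel(kernel_id: usize) -> NodeId {
    MAX_LOCAL_AFFINITIES + kernel_id
}

/// Recover the owning kernel from a NodeId. This is what replaces a separate
/// client frame-mapping table: ownership is embedded in the node id itself.
fn kernel_for_affinity(affinity: NodeId) -> Option<usize> {
    if affinity >= MAX_LOCAL_AFFINITIES {
        Some(affinity - MAX_LOCAL_AFFINITIES)
    } else {
        None // a host-local (non-shmem) affinity
    }
}

fn main() {
    let client = 2;
    let affinity = shmem_affinity_for_kernel(client);
    assert_eq!(kernel_for_affinity(affinity), Some(client));
    println!("client {client} has shmem affinity NodeId {affinity}");
}
```

Under this scheme, any allocator handed a NodeId above the local range knows both that the frame is shmem and which kernel it belongs to, which is the property the description says will be needed for dynamic replication.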

hunhoffe marked this pull request as ready for review on July 6, 2023 at 21:53.
hunhoffe (Collaborator, Author) commented:

Found a bug. I will commit a fix shortly and hold off on merging until then.

hunhoffe (Collaborator, Author) commented:

This should be ready to go again.

hunhoffe merged commit f6f9c39 into master on Jul 16, 2023.
10 checks passed
hunhoffe deleted the dev/rackscale-shmem-allocators branch on July 16, 2023 at 21:39.