initial version of shared memcached #329

Open

wants to merge 44 commits into base: master

Changes from all commits (44 commits)
f3ad0b2
lib/vibrio: adding a hack that sends some network packets
achreto Oct 17, 2023
26a0525
tests/dhcp: statically configure all dhcp entries for the VMs
achreto Oct 17, 2023
1bfdc00
tests: tweak the sleep times in the benchmarks
achreto Oct 17, 2023
7012327
usr/rkapps: bump librettos version
achreto Oct 17, 2023
1189be5
tests: adding support for multinode test in rackscale runner
achreto Oct 17, 2023
5a0826a
tests: adding sharded memcached benchmark
achreto Oct 17, 2023
c37ea63
run.py: handle features of the usr & libs properly
achreto Oct 17, 2023
cdc9f73
run.py: always create a bridge and run dhcp on the bridge
achreto Oct 17, 2023
04620ac
tests: adding sharded memcached to the CI pipeline
achreto Oct 17, 2023
8c15421
run.py: adding --mid argument to set the machine id explicitly
achreto Oct 17, 2023
ff265f0
tests: set the machine id to 0 for the sharded tests
achreto Oct 17, 2023
ddd3fbb
tests: increase the core count for internal memcached benchmark
achreto Oct 17, 2023
ee7323d
tests: unify memcached memory/query config
achreto Oct 18, 2023
16c5d8e
tests: update memcached memory configuration
achreto Oct 19, 2023
7cec4ea
tests: fix compilation
achreto Oct 19, 2023
c183b39
tests: increase timeout for sharded nros
achreto Oct 19, 2023
9d99c70
lib/lineup: don't assert the lock owner
achreto Nov 1, 2023
1d9860d
tests: fixing a few things in the sharded nros case
achreto Nov 1, 2023
862b52c
tests: reduce the memory size for memcached to see if the tests runs
achreto Nov 1, 2023
1f3f186
bench: disable the sharded nros benchmark
achreto Nov 13, 2023
e15d8e7
Fixed formatting errors
hunhoffe Nov 19, 2023
fc31a02
Update memcache-bench git hash, detect memory allocation failures in …
hunhoffe Nov 20, 2023
19f5544
Add additional memcached benchmark memory for sharded linux
hunhoffe Nov 20, 2023
b0c21f8
Start to consolidate memcached helper functions
hunhoffe Nov 20, 2023
296b78a
Make memcached naming convention and output csv files consistent
hunhoffe Nov 20, 2023
8c59e8e
Update ci with new file names
hunhoffe Nov 20, 2023
ac20b6d
unify the clients/cores for the nr sharded with the rackscale version
achreto Nov 21, 2023
a98a3f8
benchmarks: apply formatter
achreto Nov 22, 2023
3ca169c
bench: use 4G/10M configuration for memcached
achreto Nov 22, 2023
f2ed2b1
memcached: 64G memory and 10M queries
achreto Nov 27, 2023
3b5ec98
fix compilation
achreto Nov 27, 2023
f7e3f71
increase timeout
achreto Nov 27, 2023
6085529
some fixes after rebasing on the large-shmem branch
achreto Nov 27, 2023
82c757d
memcached: increase memory size to avoid running out of memory
achreto Nov 27, 2023
bf776d4
correctly set timeout in rackscale runner
achreto Nov 28, 2023
a80b98c
committing the working memcached benchmark
achreto Nov 30, 2023
9f03c7d
bump memcached versions
achreto Dec 3, 2023
1c552bc
some more tweaks in the memcachd benchmark
achreto Dec 4, 2023
d06f171
memcached: setting the number of threads propery
achreto Dec 5, 2023
bdcf45e
memcached: match on multiple things to account for reordering
achreto Dec 5, 2023
a486aec
update librettors
achreto Dec 5, 2023
481ead0
updated memcached hashes to updated version, updated parse_memcached_…
zmckevitt Mar 12, 2024
fce5f9a
ignoring s10 memcached test
zmckevitt Mar 12, 2024
4cdaace
increased timeout for benchmarks on CI runner and ignoring leveldb be…
zmckevitt Mar 12, 2024
1 change: 1 addition & 0 deletions .github/workflows/skylake2x-tests.yml
@@ -49,3 +49,4 @@ jobs:
bash scripts/ci.bash
env:
CI_MACHINE_TYPE: "skylake2x"
timeout-minutes: 600
3 changes: 2 additions & 1 deletion .github/workflows/skylake4x-tests.yml
@@ -48,4 +48,5 @@ jobs:
bash setup.sh
bash scripts/ci.bash
env:
CI_MACHINE_TYPE: "skylake4x"
CI_MACHINE_TYPE: "skylake4x"
timeout-minutes: 600
73 changes: 50 additions & 23 deletions kernel/run.py
@@ -49,6 +49,7 @@ def get_network_config(workers):
config['tap{}'.format(2*i)] = {
'mid': i,
'mac': '56:b4:44:e9:62:d{:x}'.format(i),
'ip' : f"172.31.0.1{i}"
}
return config

@@ -106,6 +107,8 @@ def get_network_config(workers):
# DCM Scheduler arguments
parser.add_argument("--dcm-path",
help='Path of DCM jar to use (defaults to latest release)', required=False, default=None)
parser.add_argument("--mid",
help="Machine id to set for this instance", required=False, default=None)

# QEMU related arguments
parser.add_argument("--qemu-nodes", type=int,
@@ -215,8 +218,10 @@ def build_kernel(args):
build_args = ['build', '--target', KERNEL_TARGET]
if args.no_kfeatures:
build_args += ["--no-default-features"]
log(" - enable feature --no-default-features")
for feature in args.kfeatures:
build_args += ['--features', feature]
log(" - enable feature {}".format(feature))
build_args += CARGO_DEFAULT_ARGS
build_args += CARGO_NOSTD_BUILD_ARGS
if args.verbose:
@@ -233,6 +238,16 @@ def build_user_libraries(args):
build_args += ["--features", "rumprt"]
if args.nic == "virtio-net-pci":
build_args += ["--features", "virtio"]
log(" - enable feature virtio")

for featurelist in args.ufeatures:
for feature in featurelist.split(',') :
if ':' in feature:
mod_part, feature_part = feature.split(':')
if "libvibrio" == mod_part:
log(" - enable feature {}".format(feature_part))
build_args += ['--features', feature_part]

# else: use e1000 / wm0
build_args += CARGO_DEFAULT_ARGS
build_args += CARGO_NOSTD_BUILD_ARGS
Expand All @@ -259,18 +274,21 @@ def build_userspace(args):
if not (USR_PATH / module).exists():
log("User module {} not found, skipping.".format(module))
continue
log("build user-space module {}".format(module))
with local.cwd(USR_PATH / module):
with local.env(RUSTFLAGS=USER_RUSTFLAGS):
with local.env(RUST_TARGET_PATH=USR_PATH.absolute()):
build_args = build_args_default.copy()
for feature in args.ufeatures:
if ':' in feature:
mod_part, feature_part = feature.split(':')
if module == mod_part:
build_args += ['--features', feature_part]
else:
build_args += ['--features', feature]
log("Build user-module {}".format(module))
for featurelist in args.ufeatures:
for feature in featurelist.split(',') :
if ':' in feature:
mod_part, feature_part = feature.split(':')
if module == mod_part:
log(" - enable feature {}".format(feature_part))
build_args += ['--features', feature_part]
else:
log(" - enable feature {}".format(feature))
build_args += ['--features', feature]
if args.verbose:
print("cd {}".format(USR_PATH / module))
print("RUSTFLAGS={} RUST_TARGET_PATH={} cargo ".format(
@@ -316,7 +334,10 @@ def deploy(args):
# Append globally unique machine id to cmd (for rackscale)
# as well as a number of workers (clients)
if args.cmd and NETWORK_CONFIG[args.tap]['mid'] != None:
args.cmd += " mid={}".format(NETWORK_CONFIG[args.tap]['mid'])
if args.mid is None :
args.cmd += " mid={}".format(NETWORK_CONFIG[args.tap]['mid'])
else :
args.cmd += f" mid={args.mid}"
if is_controller or is_client:
args.cmd += " workers={}".format(args.workers)
# Write kernel cmd-line file in ESP dir
@@ -733,20 +754,26 @@ def configure_network(args):
sudo[ip[['link', 'set', '{}'.format(tap), 'down']]](retcode=(0, 1))
sudo[ip[['link', 'del', '{}'.format(tap)]]](retcode=(0, 1))

# Need to find out how to set default=True in case workers are >0 in `args`
if (not 'workers' in args) or ('workers' in args and args.workers <= 1):
sudo[tunctl[['-t', args.tap, '-u', user, '-g', group]]]()
sudo[ifconfig[args.tap, NETWORK_INFRA_IP]]()
sudo[ip[['link', 'set', args.tap, 'up']]](retcode=(0, 1))
else:
assert args.workers <= MAX_WORKERS, "Too many workers, can't configure network"
sudo[ip[['link', 'add', 'br0', 'type', 'bridge']]]()
sudo[ip[['addr', 'add', NETWORK_INFRA_IP, 'brd', '+', 'dev', 'br0']]]()
for _, ncfg in zip(range(0, args.workers), NETWORK_CONFIG):
sudo[tunctl[['-t', ncfg, '-u', user, '-g', group]]]()
sudo[ip[['link', 'set', ncfg, 'up']]](retcode=(0, 1))
sudo[brctl[['addif', 'br0', ncfg]]]()
sudo[ip[['link', 'set', 'br0', 'up']]](retcode=(0, 1))

# figure out how many workers we have
workers = 1
if 'workers' in args:
workers = args.workers

# create the bridge
sudo[ip[['link', 'add', 'br0', 'type', 'bridge']]]()
sudo[ip[['addr', 'add', NETWORK_INFRA_IP, 'brd', '+', 'dev', 'br0']]]()

# add a network interface for every worker there is
for _, ncfg in zip(range(0, workers), NETWORK_CONFIG):
sudo[tunctl[['-t', ncfg, '-u', user, '-g', group]]]()
sudo[ip[['link', 'set', ncfg, 'up']]](retcode=(0, 1))
sudo[brctl[['addif', 'br0', ncfg]]]()

# set the link up
sudo[ip[['link', 'set', 'br0', 'up']]](retcode=(0, 1))

sudo[brctl[['setageing', 'br0', 600]]]()


def configure_dcm_scheduler(args):
36 changes: 33 additions & 3 deletions kernel/tests/dhcpd.conf
@@ -5,11 +5,11 @@ option domain-name-servers ns1.example.org, ns2.example.org;
ddns-update-style none;

subnet 172.31.0.0 netmask 255.255.255.0 {
range 172.31.0.12 172.31.0.16;
range 172.31.0.118 172.31.0.118;
option routers 172.31.0.20;
option subnet-mask 255.255.255.0;
default-lease-time 1;
max-lease-time 1;
default-lease-time 1000;
max-lease-time 1000;
}

host nrk1 {
@@ -20,4 +20,34 @@ host nrk1 {
host nrk2 {
hardware ethernet 56:b4:44:e9:62:d1;
fixed-address 172.31.0.11;
}

host nrk3 {
hardware ethernet 56:b4:44:e9:62:d2;
fixed-address 172.31.0.12;
}

host nrk4 {
hardware ethernet 56:b4:44:e9:62:d3;
fixed-address 172.31.0.13;
}

host nrk5 {
hardware ethernet 56:b4:44:e9:62:d4;
fixed-address 172.31.0.14;
}

host nrk6 {
hardware ethernet 56:b4:44:e9:62:d5;
fixed-address 172.31.0.15;
}

host nrk7 {
hardware ethernet 56:b4:44:e9:62:d6;
fixed-address 172.31.0.16;
}

host nrk8 {
hardware ethernet 56:b4:44:e9:62:d7;
fixed-address 172.31.0.17;
}
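
For context on how the run.py and dhcpd.conf changes above fit together: configure_network() now always creates the br0 bridge and attaches one tap device per worker, each tap's MAC maps to a static lease in dhcpd.conf (56:b4:44:e9:62:d1 -> 172.31.0.11, and so on), and deploy() appends mid=<id> and workers=<n> to the kernel command line, taking the machine id from the tap's NETWORK_CONFIG entry unless --mid is passed explicitly. A rough sketch of the effect for a client launched with --mid 1 and two workers; only the --mid flag and the mid=/workers= behaviour are taken from this diff, the surrounding flags and values are placeholders:

    # illustrative only: --mid overrides the id otherwise derived from the tap device
    python3 kernel/run.py --mid 1 ...        # other build/run flags elided
    # suffix appended to the kernel command line by deploy() in this case:
    #   <cmd> mid=1 workers=2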