
Slow write speeds #38

Open
D-o-d-o-x opened this issue Apr 2, 2021 · 14 comments
Labels
question Further information is requested

Comments

@D-o-d-o-x

My first tier is on an SSD which normally gives me write speeds >300 MB/s.
But if I test the write speed on the autotier filesystem I only get ~100 MB/s, even though it is writing to the SSD.
(Tested via dd if=/dev/zero of=test.img bs=1G count=1 oflag=dsync)

Do I have a problem in my setup (or my testing), or is such an overhead (a ~66% performance loss) expected?

@joshuaboud
Member

Hi @D-o-d-o-x,
Just ran the same test (dd if=/dev/zero of=test.img bs=1G count=1 oflag=dsync) on my own setup into an SSD through autotier and got a write speed of between 229 and 236 MB/s. Bypassing autotier, I get between 365 and 368 MB/s. Can you give me some info on your system, i.e. which Linux distribution, output of autotierfs --version, model of SSD, and filesystem on the SSD? Some overhead is to be expected because this filesystem is implemented with FUSE; however, the overhead you are seeing is more than I would expect.
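For reference, a quick way to gather all of that in one go (standard Ubuntu/Debian tooling; the exact lsblk columns chosen here are just a suggestion):

```sh
# Collect the details requested above.
lsb_release -ds && uname -r           # distribution and kernel
autotierfs --version                  # autotier version
lsblk -d -o NAME,MODEL,TRAN,SIZE      # drive models and transports
lsblk -f                              # filesystem on each device/partition
```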

@joshuaboud joshuaboud added the question Further information is requested label Apr 7, 2021
@D-o-d-o-x
Author

Linux distribution: Ubuntu 20.04.2 LTS
Linux Kernel: 5.4.0-70
Output of autotierfs --version: autotier 1.1.3 (installed using the deb-package)
The SSD used as the first tier is a 'LITEON CV1-8B256' (SATA over M.2 256GB); formatted as ext4.
All other tiers are btrfs. (One SSD and one HDD)

I also tested some other block counts / sizes:

1x1GB via dd if=/dev/zero of=test.img bs=1G count=1 oflag=dsync
SSD: 312 MB/s
autotier: 102 MB/s

10x100MB via dd if=/dev/zero of=test.img bs=100MB count=10 oflag=dsync
SSD: 210 MB/s
autotier: 99 MB/s

1x10GB via dd if=/dev/zero of=test.img bs=10G count=1 oflag=dsync
SSD: 216 MB/s
autotier: 82 MB/s

Using actual data (a 1 GB file; expected to be a bit slower since I'm copying from a slower SSD)
SSD: 285 MB/s
autotier: 100 MB/s

@D-o-d-o-x
Author

I saw you added 'Fuse library version' to the issue template.

So here is my output of apt list --installed | grep fuse (these should be the newest versions in the Ubuntu repos):

exfat-fuse/focal,now 1.3.0-1 amd64 [installed,automatic]
fuse3/focal,now 3.9.0-2 amd64 [installed,automatic]
gvfs-fuse/focal,now 1.44.1-1ubuntu1 amd64 [installed]
libfuse2/focal,now 2.9.9-3 amd64 [installed]
libfuse3-3/focal,now 3.9.0-2 amd64 [installed,automatic]

@joshuaboud
Member

Thank you for this information, we will investigate the issue further on our end. Just as a sanity check, could you attach your autotier.conf configuration? I just want to make sure that the SSD is actually the highest tier in the list. If the SSD is indeed the top tier, you can also check that the test file you're writing is in the SSD tier with:

autotier which-tier /autotier/mountpoint/path/test.img

@D-o-d-o-x
Author

Running autotier which-tier /mnt/autotier/test/test.img tells me that the file is indeed stored on the SSD (/mnt/local_autotier/test/test.img).
The test.img file also appears in the directory on the SSD.

I had problems getting mounting via fstab to work. (I could also give you more details on this, but I thought that would be an unrelated issue.)
Instead, mounting is done via autotierfs /mnt/autotier -o allow_other,default_permissions after each boot.

My autotier.conf is attached:
autotier.conf.txt
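(As a stop-gap until the fstab entry is sorted out, the manual mount command above can be wrapped in a small systemd unit. This is only a sketch, not taken from the autotier docs: it assumes the deb package installs the binary at /usr/bin/autotierfs and that the daemon forks into the background like most FUSE filesystems.)

```
# /etc/systemd/system/autotier.service — hypothetical stop-gap unit,
# wrapping the manual mount command already known to work above
[Unit]
Description=autotier tiered FUSE filesystem
After=local-fs.target

[Service]
Type=forking
ExecStart=/usr/bin/autotierfs /mnt/autotier -o allow_other,default_permissions
ExecStop=/usr/bin/fusermount3 -u /mnt/autotier

[Install]
WantedBy=multi-user.target
```

It would then be enabled with systemctl enable --now autotier.service.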

@joshuaboud
Member

If you have the time, could you open a separate issue for the fstab problems? That might make it easier to find in case other people run into the same issue. You're definitely right, though: the write speeds you are seeing are lower than I would expect. I will have to benchmark it under Ubuntu to see if I get different results, as most of the testing I do is on my Manjaro machine.

@gdyr

gdyr commented Jul 20, 2022

@joshuaboud Was there some resolution for this? I've just deployed autotier and similar dd tests show:

  • writes to /mnt/ssd @ 1.3 GB/s
  • writes via autotier at 207 MB/s

and the file is definitely landing in /mnt/ssd.

@joshuaboud
Member

I'll do a quick investigation to see if I can recreate these results. In the meantime, here are some results from benchmarking I did back in October of 2021, showing an expected loss in performance (especially at small block sizes) through autotier compared to writing directly to the SSD. The SSD I used in those benchmarks was a Micron 5200 MTFDDAK3T8TDC and the HDD was a WDC WUH721816ALE6L4, both with XFS directly on the device (no partition), and both over SATA. Granted, these benchmarks were an experiment to see how autotier would perform as a sort of writeback cache, so no read benchmarks were done in this case. @gdyr, what block size did you use for those dd tests?

[Attached charts: writeback_cache_speed, writeback_cache_iops]
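For anyone who wants to reproduce this kind of comparison, a minimal dd sweep over a few block sizes might look like the sketch below. The /mnt/ssd and /mnt/autotier mount points are placeholders; each run writes roughly 1 GiB with oflag=dsync, matching the tests above.

```sh
#!/bin/sh
# Sketch: compare direct-to-SSD vs. through-autotier write throughput
# at a few block sizes, keeping each run at roughly 1 GiB total.
for spec in "1M 1024" "16M 64" "128M 8" "1G 1"; do
    set -- $spec
    bs=$1 count=$2
    for target in /mnt/ssd /mnt/autotier; do   # placeholder mount points
        printf '%s bs=%s count=%s: ' "$target" "$bs" "$count"
        # dd prints its throughput summary on stderr; keep only the last line
        dd if=/dev/zero of="$target/test.img" bs="$bs" count="$count" oflag=dsync 2>&1 | tail -n 1
        rm -f "$target/test.img"
    done
done
```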

@joshuaboud joshuaboud reopened this Jul 20, 2022
@gdyr

gdyr commented Jul 20, 2022

Cheers for looking into it! I was running the same speed test as OP (dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync).
The config I had was autotier sitting in front of two ZFS pools (one of 4x 2TB NVMe SSDs, one of 1x 10TB HDD).
This is a home server setup, so in the time since posting I've tried TrueNAS (~600 MB/s over SMB without tiering) and am now back on Storage Spaces (1.1 GB/s over SMB with tiering, but with occasional drops to 0).
(Hopping around looking for the path of least resistance.)

@joshuaboud
Member

Recreating this with the Micron 5200 MTFDDAK3T8TDC SSD, and autotier compiled from source on Manjaro with kernel 5.10.131-1-MANJARO:

> (quoting @D-o-d-o-x's system details and dd test results from earlier in the thread)

dd if=/dev/zero of=test.img bs=1G count=1 oflag=dsync
  SSD direct: 366, 360, 366 MB/s (avg 364 MB/s)
  autotier:   259, 229, 231 MB/s (avg 239 MB/s; overhead -125 MB/s, -34%)

dd if=/dev/zero of=test.img bs=100MB count=10 oflag=dsync
  SSD direct: 358, 358, 369 MB/s (avg 361 MB/s)
  autotier:   250, 228, 233 MB/s (avg 237 MB/s; overhead -124 MB/s, -34%)

dd if=/dev/zero of=test.img bs=10G count=1 oflag=dsync
  SSD direct: 361, 361, 372 MB/s (avg 364 MB/s)
  autotier:   223, 247, 246 MB/s (avg 238 MB/s; overhead -126 MB/s, -35%)

@joshuaboud
Member

joshuaboud commented Jul 20, 2022

Another consideration is CPU bottlenecking due to FUSE. The workstation I've been testing on has an AMD EPYC 7281 processor, with 16 cores/32 threads at 2.1 GHz. @gdyr, what do you have in your home server for hardware? Also, are the drives connected through SATA cables, M.2 SATA, NVMe, or an HBA card?
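One quick way to check for that kind of FUSE CPU bottleneck (assuming the sysstat package for pidstat, and that the daemon process is named autotierfs) is to watch the daemon's CPU usage while the write test runs:

```sh
# Terminal 1: per-thread CPU usage of the autotier FUSE daemon, sampled every second
pidstat -t -p "$(pidof autotierfs)" 1

# Terminal 2: the same write test through the autotier mount point
dd if=/dev/zero of=/mnt/autotier/test.img bs=1G count=1 oflag=dsync
```

If a single thread sits near 100% while the disk is far from saturated, the bottleneck is more likely the FUSE daemon (or the single dd stream) than the drive.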

@gdyr

gdyr commented Jul 20, 2022

A Xeon E5-1620 v4.
The drives are NVMe via a PCIe adaptor (x4x4x4x4 bifurcation).

@TheLinuxGuy

During my own testing I saw a much higher performance penalty from autotier when using NFS. When forcing 'sync' (not using 'async' mode on the NFS server), write performance is 134 MB/s, whereas raw fio write performance to the array is 900 MB/s.

Also, when using 'async' NFS I can come close to 900 MB/s writes, but then the data written to the filesystem actually has a lot of data loss: fio would run a 16 GB file test, but only about 5 GB of data would end up on disk. It must be some bug in autotier. Only when I force NFS to behave synchronously do I see the full 16 GB fio file actually get written, but then performance is abysmal.
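A minimal way to double-check that behaviour (the fio parameters and paths here are illustrative, not the exact job used above) is to write a fixed-size file through the mount and then compare what actually lands on the backing tiers:

```sh
# Write a 16 GiB file through the autotier/NFS mount (illustrative fio job)
fio --name=writetest --filename=/mnt/autotier/fio-test.img \
    --rw=write --bs=1M --size=16G --end_fsync=1

# Compare the apparent size with what is actually on the backing tiers
ls -lh /mnt/autotier/fio-test.img
du -sh /mnt/tier*/fio-test.img        # tier paths are placeholders for your setup
```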

@TheLinuxGuy

It's unfortunate that this project seems to have been abandoned by the authors. It's really easy to set up and would be awesome for home use (an unraid replacement); I think the only feasible alternative is mergerfs, which is well maintained.
