Fio writing twice the amount of data to the disk set by size #1726
Comments
If the file does not already exist, fio lays it out before starting the workload. Do you still see the same increase in the SMART values when the file already exists? Also, the amount of data written by the device (assuming it's an SSD) will be affected by garbage collection and, to a smaller extent, file system metadata. It's unreasonable to expect Total Host Writes to match the amount of writes issued by fio.
I'll answer about garbage collection first: writes made during garbage collection should not show up in Host Writes, since those are firmware-level writes, so I don't believe that's the cause; even more so because, as I said in the update, if I copy and paste a 1GiB file generated by fio, SMART only reports a 1GiB increase in host writes. About the files not existing before the workload: correct, they didn't exist. So, does laying out the file make fio write the data twice?
Ok, you are right about garbage collection. Try your job on a pre-existing file and report back.
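(For reference, one way to run that test is to let fio lay the file out in a separate setup pass and then measure only the second run. This is a sketch based on the reporter's command; `create_only` is an existing fio option, though how it interacts with windowsaio on this particular setup is an assumption.)

```sh
# Setup pass: only lay out the file, no workload I/O is run.
fio --name=test --directory=D\:\ --ioengine=windowsaio --rw=randwrite --bs=4M --size=1G --thread --create_only=1

# Measured pass: the file now exists, so fio should skip layout.
# Compare SMART host writes before and after this run only.
fio --name=test --directory=D\:\ --ioengine=windowsaio --rw=randwrite --bs=4M --numjobs=1 --iodepth=1 --direct=1 --end_fsync=1 --size=1G --thread
```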
Can confirm that this is a problem when calculating for and running traffic on block devices.
It looks like the total number of bytes written is multiplied by the number of jobs, whereas in previous versions the jobs would share the total number of bytes to write. @gustavo16a, can you test whether or not this is consistent with your Windows block device or test file?
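(A minimal way to check that, assuming the same setup as the original report: run the same job with two jobs and aggregated output, then compare fio's own WRITE total and the SMART delta against 1GiB vs. 2GiB. `group_reporting` is a standard fio option; whether size is applied per job or shared is exactly what this would show.)

```sh
# Same job as the original report, but with two jobs and aggregated output.
# If size is applied per job, fio should report ~2GiB written in total;
# if the jobs share the total, it should report ~1GiB.
fio --name=test --directory=D\:\ --ioengine=windowsaio --rw=randwrite --bs=4M --numjobs=2 --iodepth=1 --direct=1 --end_fsync=1 --size=1G --thread --group_reporting
```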
Sorry guys for the long time without a response.
Regarding this, it seems that with pre-existing files it does not happen. However, that is not useful to me, because we have to delete the files created in our test. We accepted the behavior and moved on with it.
Regarding what was mentioned above, I think the
Description of the bug:
When I issue a command to fio to write random data, the amount of data written to the disk, as reported by its SMART info, is twice the size set in the command. With sequential writes the issue doesn't happen. I was wondering if this is something related to the process of creating the random data?
Environment:
Windows 10
fio version: 3.36
Reproduction steps
Check the SMART info of the disk before.
Issue the command to write 1GiB of random data:
fio --name=test --directory=D\:\ --ioengine=windowsaio --rw=randwrite --bs=4M --numjobs=1 --iodepth=1 --direct=1 --end_fsync=1 --size=1G --thread
I'm uploading the command result, in case that helps:
command_result.txt
Check SMART info of the disk after the command:
In my case, each unit of total host writes corresponds to 32MiB written, so 601 - 538 = 63 units, and 63 × 32 = 2016MiB, which is approximately 2GiB.
I tried to overcome this using the options `number_ios`, `io_size`, `file_size`, and so on. If I set `number_ios=1`, then I got approximately 1GiB, but if I set `numjobs` to a higher value, for example 10, then I got 6.48GiB written, which is less than the expected 10GiB for 10 jobs of 1GiB each.

Update1: Just for info, the files created by fio are 1GiB in size, and if I copy and paste them on Windows the SMART info from the disk reports that 1GiB was written; so it probably isn't something related to the files themselves.
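(For completeness, a sketch of how those options relate, assuming `bs=4M` as in the original command: `number_ios` caps the count of I/Os per job, so the workload bytes come out to roughly number_ios × bs, while `io_size` caps the workload bytes directly and `size` still controls the length of the file that gets laid out if it does not exist. The exact commands below are illustrative, not a confirmed fix.)

```sh
# With bs=4M, 256 I/Os is roughly 1GiB of workload writes per job:
fio --name=test --directory=D\:\ --ioengine=windowsaio --rw=randwrite --bs=4M --numjobs=1 --iodepth=1 --direct=1 --end_fsync=1 --size=1G --number_ios=256 --thread

# Alternatively, io_size limits the workload bytes while size still
# defines the 1GiB file that is laid out if it does not already exist:
fio --name=test --directory=D\:\ --ioengine=windowsaio --rw=randwrite --bs=4M --numjobs=1 --iodepth=1 --direct=1 --end_fsync=1 --size=1G --io_size=1G --thread
```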