Problem :
I am timing some code and I would like to tell how much of the time taken is due to reading the data in from disk. I don’t believe the result that time gives me. For example, I have a 1.3 GB file, and if I run wc on it I get
time wc largefile.file
50000000 150000000 1316665179 largefile.file
real 0m26.835s
user 0m18.363s
sys 0m0.495s
It can’t possibly have taken less than 0.5 seconds (the sys figure) to read the file in from my old hard drive.
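(For scale: 1316665179 bytes in 0.495 seconds would be roughly 2.7 GB/s of sustained reading, far more than an old spinning disk can deliver.)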
Is there a reliable way to tell how much of the time was due to I/O?
Further details on why I don’t see how to interpret the output of time: if I do
time cat largefile.file > /dev/null
real 0m24.230s
user 0m0.060s
sys 0m1.473s
then it is tempting to say that about 22.7 seconds are spent on I/O. But the wc figures from above imply that it is only about 8 seconds. These two figures are not consistent.
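Spelling out the subtraction behind those two estimates (real time minus user + sys CPU time):

cat: 24.230 - (0.060 + 1.473) = 22.697 s unaccounted for
wc:  26.835 - (18.363 + 0.495) = 7.977 s unaccounted for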
Solution :
sys means CPU time spent in-kernel, but what you want is I/O wait time.
Googling turned up another Stack Exchange answer pointing at "per-process iowait from /proc/$pid/stat". (And you may need to run the program under a debugger and set a breakpoint on exit() / _exit(), so you can read out the iowait value before the process goes away.)
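A minimal sketch of that idea, assuming a Linux kernel with delay accounting enabled: field 42 of /proc/$pid/stat is delayacct_blkio_ticks, the aggregated block I/O delay in clock ticks (it reads 0 if delay accounting is off). Rather than a debugger breakpoint, this just polls while the process is alive and keeps the last reading:

wc largefile.file &
pid=$!
last=0
while kill -0 "$pid" 2>/dev/null; do
    # field 42 = delayacct_blkio_ticks; counting fields this way assumes the
    # comm field (field 2) contains no spaces, which holds for "wc"
    ticks=$(awk '{print $42}' "/proc/$pid/stat" 2>/dev/null) && last=$ticks
    sleep 0.1
done
wait "$pid"
echo "block I/O delay: $last clock ticks ($(getconf CLK_TCK) ticks per second)"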
Often I just calculate it by subtracting the CPU time (user + sys) from the real time. That assumes the process doesn’t wait for anything you wouldn’t count as "I/O".
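For example, a rough way to automate that subtraction, assuming GNU time is installed as /usr/bin/time (the -f format specifiers below are GNU time’s, not the shell builtin’s, and /tmp/times is just a scratch file):

/usr/bin/time -f "%e %U %S" wc largefile.file 2> /tmp/times
read real user sys < /tmp/times
# real minus CPU time: everything the process spent waiting, mostly disk I/O here
echo "estimated wait: $(echo "$real - $user - $sys" | bc) s"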