UC Davis Cyrus Incident September 2007

Rob Mueller robm at fastmail.fm
Wed Oct 17 20:53:12 EDT 2007


> could someone whip up a small test that could be used to check different
> operating systems (and filesystems) for this concurrency problem?

Not a bad idea. I was able to throw something together in about half an hour 
in Perl; see the attached ostest.pl. It requires the Benchmark, Time::HiRes 
and Sys::Mmap modules to be installed.
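
For anyone who doesn't want to grab the attachment first, here's a rough 
sketch of the kind of test it runs: fork a pile of idle children, time 
forking and reaping one more as the background count grows, then time 
poking random bytes in a big mmapped file. This is only a reconstruction 
from the output below, NOT the attached ostest.pl; the file path, child 
counts and iteration counts are made up for illustration.

#!/usr/bin/perl
# Rough sketch only -- NOT the attached ostest.pl. Reconstructed from
# the shape of the output below; paths and counts are invented.
use strict;
use warnings;
use Benchmark qw(timethis);
use Sys::Mmap;    # exports mmap, PROT_*, MAP_* by default

# Fork $n children that just sit idle, to simulate a Cyrus master
# with thousands of mostly-idle service processes.
sub spawn_children {
    my ($n) = @_;
    my @pids;
    for (1 .. $n) {
        defined(my $pid = fork()) or die "fork: $!";
        if ($pid == 0) { sleep 3600; exit 0 }    # child: do nothing
        push @pids, $pid;
    }
    return @pids;
}

# Phase 1: time fork + immediate reap of one child while increasing
# numbers of other children exist.
my @background;
for my $extra (0, 2000, 4000) {
    print "Time to fork + immediately reap 1 child, $extra other children\n";
    timethis(1000, sub {
        defined(my $pid = fork()) or die "fork: $!";
        exit 0 if $pid == 0;                     # child exits at once
        waitpid($pid, 0);                        # parent reaps it
    });
    push @background, spawn_children(2000);      # grow the background load
}
kill 'TERM', @background;
waitpid($_, 0) for @background;

# Phase 2: create and mmap a 100M file, then modify random bytes in
# it (each write dirties a page, like updating cyrus.index records).
open my $fh, '+>', '/tmp/ostest.mmap' or die "open: $!";
truncate $fh, 100 * 1024 * 1024 or die "truncate: $!";
mmap(my $map, 100 * 1024 * 1024, PROT_READ | PROT_WRITE, MAP_SHARED, $fh)
    or die "mmap: $!";
timethis(500, sub {
    # same-length substr assignment is the documented way to write
    # into a Sys::Mmap region without resizing the scalar
    substr($map, int(rand(length $map)), 1) = chr(int(rand 256));
});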

Here's a run on Linux 2.6.20.11 on a three-year-old 2.4 GHz Xeon machine. 
Ignore the "@ xxx/s" results; they appear to be bogus, probably because 
Benchmark doesn't cope well with code that is mostly system time rather 
than user time. Comparing the "wallclock secs" figures between systems is 
probably the most interesting thing to look at. Thinking about it further, 
it's probably worth tweaking the script to compute proper xxx/s figures by 
timing by hand (a sketch of that idea follows the results below). Anyway, 
here are some results for now...

$ perl ostest.pl
No mmaped file

Time to fork + immediately reap 1 child, 0 other children
timethis 1000: 1.8103 wallclock secs ( 0.06 usr  0.15 sys +  0.43 cusr  1.17 csys =  1.81 CPU) @ 4761.90/s (n=1000)
Time to fork + immediately reap 1 child, 2000 other children
timethis 1000: 2.10359 wallclock secs ( 0.03 usr  0.42 sys +  0.49 cusr  1.16 csys =  2.10 CPU) @ 2222.22/s (n=1000)
Time to fork + immediately reap 1 child, 4000 other children
timethis 1000: 2.46765 wallclock secs ( 0.03 usr  0.54 sys +  0.69 cusr  1.20 csys =  2.46 CPU) @ 1754.39/s (n=1000)
Time to fork + immediately reap 1 child, 6000 other children
timethis 1000: 2.78328 wallclock secs ( 0.05 usr  1.14 sys +  0.45 cusr  1.14 csys =  2.78 CPU) @ 840.34/s (n=1000)
Time to fork + immediately reap 1 child, 8000 other children
timethis 1000: 3.11913 wallclock secs ( 0.02 usr  0.73 sys +  0.79 cusr  1.57 csys =  3.11 CPU) @ 1333.33/s (n=1000)
Time to fork + immediately reap 1 child, 10000 other children
timethis 1000: 3.4374 wallclock secs ( 0.03 usr  2.45 sys +  0.43 cusr  0.52 csys =  3.43 CPU) @ 403.23/s (n=1000)
Killing 12000 children
timethis 1: 20.114 wallclock secs ( 0.15 usr 14.02 sys +  8.42 cusr 17.64 csys = 40.23 CPU) @  0.07/s (n=1)
            (warning: too few iterations for a reliable count)

Time to kill + reap 200 child processes, 2000 other children
timethis 200: 0.324761 wallclock secs ( 0.00 usr  0.09 sys +  0.11 cusr  0.30 csys =  0.50 CPU) @ 2222.22/s (n=200)
            (warning: too few iterations for a reliable count)
Time to kill + reap 200 child processes, 4000 other children
timethis 200: 0.378872 wallclock secs ( 0.00 usr  0.22 sys +  0.11 cusr  0.22 csys =  0.55 CPU) @ 909.09/s (n=200)
            (warning: too few iterations for a reliable count)
Time to kill + reap 200 child processes, 6000 other children
timethis 200: 0.45181 wallclock secs ( 0.00 usr  0.32 sys +  0.19 cusr  0.26 csys =  0.77 CPU) @ 625.00/s (n=200)
            (warning: too few iterations for a reliable count)
Time to kill + reap 200 child processes, 8000 other children
timethis 200: 0.457209 wallclock secs ( 0.00 usr  0.42 sys +  0.18 cusr  0.31 csys =  0.91 CPU) @ 476.19/s (n=200)
            (warning: too few iterations for a reliable count)
Time to kill + reap 200 child processes, 10000 other children
timethis 200: 0.527018 wallclock secs ( 0.00 usr  0.51 sys +  0.18 cusr  0.28 csys =  0.97 CPU) @ 392.16/s (n=200)
            (warning: too few iterations for a reliable count)
Killing 10000 children
timethis 1: 17.0475 wallclock secs ( 0.15 usr 10.09 sys +  7.52 cusr 13.82 csys = 31.58 CPU) @  0.10/s (n=1)
            (warning: too few iterations for a reliable count)

Creating and mmaping 100M file

Time to fork + immediately reap 1 child, 0 other children
timethis 1000: 2.4236 wallclock secs ( 0.02 usr  0.22 sys +  0.58 cusr  1.60 csys =  2.42 CPU) @ 4166.67/s (n=1000)
Time to fork + immediately reap 1 child, 2000 other children
timethis 1000: 2.66041 wallclock secs ( 0.07 usr  0.51 sys +  0.69 cusr  1.39 csys =  2.66 CPU) @ 1724.14/s (n=1000)
Time to fork + immediately reap 1 child, 4000 other children
timethis 1000: 3.07573 wallclock secs ( 0.05 usr  0.80 sys +  0.66 cusr  1.56 csys =  3.07 CPU) @ 1176.47/s (n=1000)
Time to fork + immediately reap 1 child, 6000 other children
timethis 1000: 3.31961 wallclock secs ( 0.05 usr  1.03 sys +  0.55 cusr  1.68 csys =  3.31 CPU) @ 925.93/s (n=1000)
Time to fork + immediately reap 1 child, 8000 other children
timethis 1000: 3.84754 wallclock secs ( 0.08 usr  1.83 sys +  0.82 cusr  1.11 csys =  3.84 CPU) @ 523.56/s (n=1000)
Time to fork + immediately reap 1 child, 10000 other children
timethis 1000: 4.25169 wallclock secs ( 0.05 usr  1.34 sys +  1.05 cusr  1.81 csys =  4.25 CPU) @ 719.42/s (n=1000)
Killing 12000 children
timethis 1: 23.6461 wallclock secs ( 0.16 usr 14.73 sys + 10.13 cusr 20.26 csys = 45.28 CPU) @  0.07/s (n=1)
            (warning: too few iterations for a reliable count)

Time to kill + reap 200 child processes, 2000 other children
timethis 200: 0.449704 wallclock secs ( 0.01 usr  0.07 sys +  0.14 cusr  0.26 csys =  0.48 CPU) @ 2500.00/s (n=200)
            (warning: too few iterations for a reliable count)
Time to kill + reap 200 child processes, 4000 other children
timethis 200: 0.407817 wallclock secs ( 0.00 usr  0.22 sys +  0.12 cusr  0.36 csys =  0.70 CPU) @ 909.09/s (n=200)
            (warning: too few iterations for a reliable count)
Time to kill + reap 200 child processes, 6000 other children
timethis 200: 0.455511 wallclock secs ( 0.00 usr  0.33 sys +  0.16 cusr  0.29 csys =  0.78 CPU) @ 606.06/s (n=200)
            (warning: too few iterations for a reliable count)
Time to kill + reap 200 child processes, 8000 other children
timethis 200: 0.497282 wallclock secs ( 0.00 usr  0.37 sys +  0.22 cusr  0.27 csys =  0.86 CPU) @ 540.54/s (n=200)
            (warning: too few iterations for a reliable count)
Time to kill + reap 200 child processes, 10000 other children
timethis 200: 0.541416 wallclock secs ( 0.00 usr  0.51 sys +  0.18 cusr  0.32 csys =  1.01 CPU) @ 392.16/s (n=200)
            (warning: too few iterations for a reliable count)
Killing 10000 children
timethis 1: 19.1751 wallclock secs ( 0.13 usr 10.22 sys +  7.89 cusr 16.27 csys = 34.51 CPU) @  0.10/s (n=1)
            (warning: too few iterations for a reliable count)

Modifying random points in 100M mmaped file with 0 children

Modify 100 points
timethis 500: 0.194063 wallclock secs ( 0.14 usr +  0.05 sys =  0.19 CPU) @ 2631.58/s (n=500)
            (warning: too few iterations for a reliable count)
Modify 200 points
timethis 500: 0.297212 wallclock secs ( 0.29 usr +  0.00 sys =  0.29 CPU) @ 1724.14/s (n=500)
            (warning: too few iterations for a reliable count)
Modify 300 points
timethis 500: 0.424934 wallclock secs ( 0.42 usr +  0.00 sys =  0.42 CPU) @ 1190.48/s (n=500)
            (warning: too few iterations for a reliable count)
Modify 400 points
timethis 500: 0.569864 wallclock secs ( 0.57 usr +  0.00 sys =  0.57 CPU) @ 877.19/s (n=500)
            (warning: too few iterations for a reliable count)

Modifying random points in 100M mmaped file with 4000 children

Modify 100 points
timethis 500: 0.142124 wallclock secs ( 0.14 usr +  0.00 sys =  0.14 CPU) @ 3571.43/s (n=500)
            (warning: too few iterations for a reliable count)
Modify 200 points
timethis 500: 0.281407 wallclock secs ( 0.28 usr +  0.00 sys =  0.28 CPU) @ 1785.71/s (n=500)
            (warning: too few iterations for a reliable count)
Modify 300 points
timethis 500: 0.423914 wallclock secs ( 0.42 usr +  0.00 sys =  0.42 CPU) @ 1190.48/s (n=500)
            (warning: too few iterations for a reliable count)
Modify 400 points
timethis 500: 0.561806 wallclock secs ( 0.56 usr +  0.00 sys =  0.56 CPU) @ 892.86/s (n=500)
            (warning: too few iterations for a reliable count)
Killing 4000 children
timethis 1: 7.65335 wallclock secs ( 0.07 usr  1.70 sys +  3.19 cusr  7.30 csys = 12.26 CPU) @  0.56/s (n=1)
            (warning: too few iterations for a reliable count)
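
As an aside, the hand-timing tweak mentioned above could look something 
like this. time_by_hand is a hypothetical helper, not anything in the 
attached script; it measures elapsed wallclock time directly with 
Time::HiRes, so the per-second figure stays meaningful even when nearly 
all of the cost is system time.

use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Hypothetical helper: run $code $count times and report a rate based
# on elapsed wallclock time, rather than the CPU time Benchmark uses.
sub time_by_hand {
    my ($count, $code) = @_;
    my $t0 = [gettimeofday];
    $code->() for 1 .. $count;
    my $elapsed = tv_interval($t0);   # seconds since $t0, as a float
    printf "%d iterations in %.3f wallclock secs = %.2f/s\n",
        $count, $elapsed, $count / $elapsed;
}

# e.g. replace a timethis(1000, ...) call with:
# time_by_hand(1000, sub {
#     defined(my $pid = fork()) or die "fork: $!";
#     exit 0 if $pid == 0;
#     waitpid($pid, 0);
# });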

Rob
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ostest.pl
Type: application/octet-stream
Size: 2997 bytes
Desc: not available
URL: http://lists.andrew.cmu.edu/pipermail/info-cyrus/attachments/20071018/1854b34a/attachment.obj

