cyrus perdition, connections not dying

Ian Eiloart iane at sussex.ac.uk
Tue Apr 8 06:44:44 EDT 2008



--On 7 April 2008 10:20:33 -0700 Andrew Morgan <morgan at orst.edu> wrote:

> On Mon, 7 Apr 2008, Ian Eiloart wrote:
> ......
>>
>> Is there a way to limit Cyrus process sizes at all? I guess I can take a
>> look at my compilation options to try to reduce the starting size, but
>> can I limit the process growth?
>
> How are you calculating the process size?  I'm curious to compare it with
> my Cyrus installation.
>
>  	Andy
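
On my own question above about limiting process sizes: independent of any 
Cyrus option, one OS-level approach (just a sketch, untested here; the 
master path and the 64MB figure are assumptions for illustration) is to set 
a shell resource limit in the startup script before launching master, since 
the children it forks inherit the limit:

 ulimit -d 65536             # cap the data segment at 64MB (value in KB)
 /usr/cyrus/bin/master -d

Whether the OS actually enforces that for these daemons is something I'd 
have to test. Anyway, to answer Andy's question: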

I'm using the RSIZE reported in top. Here, I've got two users on a 
development machine:

% top -U cyrus -o rsize -l 1
Processes:  100 total, 4 running, 2 stuck, 94 sleeping... 292 threads  11:27:10

Load Avg:  1.46,  1.38,  1.38    CPU usage:  7.69% user, 46.15% sys, 46.15% idle
SharedLibs: num =    7, resident =   67M code, 1600K data, 5088K linkedit.
MemRegions: num =  9491, resident =  237M +   22M private,  124M shared.
PhysMem:  658M wired,  561M active,   28M inactive, 1247M used, 2799M free.
VM: 4439M + 372M   105234(0) pageins, 0(0) pageouts

  PID COMMAND      %CPU   TIME   #TH #PRTS #MREGS RPRVT  RSHRD  RSIZE  VSIZE
   71 master       0.0%  1:50:34   1    15+    47   70M+ 2364K+   70M+   84M+
30219 proxyd       0.0%  0:01.78   1    15+   105  672K+ 8168K+ 3552K+   74M+
 1076 proxyd       0.0%  0:00.83   1    15+   110  524K+ 8176K+ 3388K+   74M+
 1093 proxyd       0.0%  0:02.56   1    15+   110  516K+ 8176K+ 3332K+   74M+
   94 proxyd       0.0%  0:00.95   1    15+   110  524K+ 8176K+ 3332K+   74M+
35737 proxyd       0.0%  0:00.37   1    15+   104  660K+ 8168K+ 3292K+   74M+
 7483 proxyd       0.0%  0:00.19   1    15+   107  408K+ 8172K+ 2040K+   74M+
   84 proxyd       0.0%  0:00.02   1    15+   107  408K+ 8172K+ 2040K+   74M+
53828 proxyd       0.0%  0:00.27   1    15+   101  464K+ 8164K+ 2024K+   74M+
51026 proxyd       0.0%  0:00.27   1    15+   101  464K+ 8164K+ 2024K+   74M+
46021 proxyd       0.0%  0:00.27   1    15+   101  464K+ 8164K+ 2024K+   74M+
44932 proxyd       0.0%  0:00.23   1    15+   101  464K+ 8164K+ 2024K+   74M+
12886 proxyd       0.0%  0:00.20   1    15+   101  464K+ 8164K+ 2024K+   74M+
  102 proxyd       0.0%  0:00.02   1    15+   107  404K+ 8172K+ 2024K+   74M+
  100 proxyd       0.0%  0:00.02   1    15+   107  404K+ 8172K+ 2024K+   74M+
   95 proxyd       0.0%  0:00.02   1    15+   107  404K+ 8172K+ 2024K+   74M+
   93 proxyd       0.0%  0:00.02   1    15+   107  404K+ 8172K+ 2024K+   74M+
   83 proxyd       0.0%  0:00.02   1    15+   107  404K+ 8172K+ 2024K+   74M+
57073 mupdate      0.0%  0:00.12   1    14+    23  136K+ 1216K+  604K+   19M+
57074 mupdate      0.0%  0:00.05   1    13+    23  128K+ 1216K+  588K+   19M+
   88 launchd      0.0% 32:36.05   3    26+    26  120K+  296K+  464K+   18M+
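
For a rough comparison between installations, the per-process RSIZE figures 
can also be totalled with ps (a quick sketch; the exact ps output keywords 
vary slightly between systems):

 % ps -U cyrus -o rss= | awk '{total += $1} END {print total " KB resident"}'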

And these are the first few processes on a production server. There are 
actually 844 processes altogether, but it's a light day.
 % top -U nobody -o rsize -l 1 | grep perdition
10080 perdition    0.0%  0:00.01   1    15    56   348K  3.91M  1.67M  32.9M
10067 perdition    0.0%  0:00.01   1    15    56   348K  3.91M  1.67M  32.9M
10054 perdition    0.0%  0:00.01   1    15    56   348K  3.91M  1.67M  32.9M
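
Since the problem is connections not dying, the elapsed-time column from ps 
is what I look at to spot perdition children that should have exited long 
ago (again just a sketch; the user and process names are as on this box):

 % ps -U nobody -o pid,etime,command | grep -c perdition   # how many are still around
 % ps -U nobody -o pid,etime,command | grep perdition | head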


-- 
Ian Eiloart
IT Services, University of Sussex
x3148

