How many php procs are running: ps -A | grep php-cgi | grep -v grep | wc -l
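A shorter equivalent where pgrep is installed (it counts processes matching the name directly):
pgrep -c php-cgi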
A box with a 1-minute load average of 6.92 is heavily overloaded if it only has 2 CPU cores; ideally the load should stay below 2 (roughly one runnable task per core).
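To put that in per-core terms (simple arithmetic; the 2-core count is the assumption from above):
echo "scale=2; 6.92 / 2" | bc    # ~3.46 runnable tasks per core, well above the ~1.0 comfort zone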
To find how many cpu cores there are:
cat /proc/cpuinfo (or just run cat /proc/cpuinfo | grep processor | wc -l to get the count directly)
A 4-core box's /proc/cpuinfo would look like this:
processor : 0
vendor_id : GenuineIntel
<---SNIP--->
processor : 1
vendor_id : GenuineIntel
<---SNIP--->
processor : 2
vendor_id : GenuineIntel
<---SNIP--->
processor : 3
<---SNIP--->
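Where the newer utilities are installed, either of these gives the count without paging through /proc/cpuinfo:
nproc
lscpu | grep '^CPU(s):'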
export TERM=linux (if you get an unknown terminal type error)
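If the terminal is still unusable, top can also be run non-interactively in batch mode for a one-shot snapshot (the head count here is arbitrary):
top -b -n 1 | head -25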
top - 14:34:28 up 65 days, 14:02, 1 user, load average: 6.92, 5.50, 3.17
Tasks: 173 total, 6 running, 167 sleeping, 0 stopped, 0 zombie
Cpu(s): 58.5%us, 11.9%sy, 0.0%ni, 2.0%id, 0.0%wa, 0.0%hi, 0.7%si, 26.9%st
Mem: 7885016k total, 7700116k used, 184900k free, 160232k buffers
Swap: 0k total, 0k used, 0k free, 2779440k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
31019 babsonco 20 0 374m 109m 49m R 44 1.4 0:13.43 php-cgi
31010 babsonco 20 0 374m 109m 49m R 37 1.4 0:14.32 php-cgi
30960 babsonco 20 0 374m 109m 49m R 36 1.4 0:18.35 php-cgi
31011 babsonco 20 0 374m 109m 49m R 30 1.4 0:14.65 php-cgi
30959 babsonco 20 0 371m 106m 49m R 30 1.4 0:18.51 php-cgi
28358 mysql 20 0 3807m 3.5g 5332 S 4 46.4 356:32.38 mysqld
24645 root 20 0 32008 6424 1400 S 3 0.1 8:17.50 ruby
1128 www-data 20 0 99.4m 5852 1436 S 2 0.1 0:01.12 apache2
28176 nobody 20 0 90564 52m 892 S 1 0.7 2:28.80 memcached
1109 www-data 20 0 99.2m 5724 1432 S 1 0.1 0:01.26 apache2
1140 www-data 20 0 99.3m 5776 1436 S 1 0.1 0:01.23 apache2
26947 root 20 0 178m 3944 1060 S 1 0.1 39:33.69 glusterfsd
1084 www-data 20 0 98.3m 4660 744 S 0 0.1 0:04.57 apache2
1087 www-data 20 0 99.5m 6016 1436 S 0 0.1 0:01.09 apache2
1124 www-data 20 0 99.3m 5724 1432 S 0 0.1 0:00.86 apache2
1127 www-data 20 0 99.3m 5832 1432 S 0 0.1 0:01.37 apache2
1129 www-data 20 0 99.2m 5680 1432 S 0 0.1 0:01.46 apache2
1145 www-data 20 0 99.2m 5764 1436 S 0 0.1 0:00.87 apache2
5145 root 20 0 174m 8900 1076 S 0 0.1 32:00.33 glusterfs
31060 babsonco 20 0 19224 1452 1060 R 0 0.0 0:00.06 top
1 root 20 0 23712 1448 760 S 0 0.0 6:13.42 init
2 root 20 0 0 0 0 S 0 0.0 0:01.22 kthreadd
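Quick sanity check on that snapshot: the five runnable php-cgi processes alone account for roughly 44 + 37 + 36 + 30 + 30 ≈ 177% CPU, and idle is down to 2.0%, so on the 2-core box assumed above the CPUs are essentially saturated - which matches the 6.92 load average.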
Memory looks fine - about 3051mb (~3gb) is free for use by other programs even under this ~100 concurrent user load (free 170 + buffers 156 + cached 2725 = 3051, the free value on the -/+ buffers/cache line below).
http://thecodecave.com/2012/02/22/understanding-free-memory-in-linux/
babsoncomm@staging-2521:~$ free -mt
             total       used       free     shared    buffers     cached
Mem:          7700       7530        170          0        156       2725
-/+ buffers/cache:       4648       3051
Swap:            0          0          0
Total:        7700       7530        170
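To pull just the usable number out of that output in a script, something like this works against this older free format (newer versions of free replace the '-/+ buffers/cache' line with an 'available' column, so the pattern would need adjusting):
free -m | awk '/buffers\/cache/ {print $4 " MB available to programs"}'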
NOTES FROM codecave.com:
# free -mt
             total       used       free     shared    buffers     cached
Mem:          7974       7921         53          0         27       2107
-/+ buffers/cache:       5786       2187
Swap:         5945        923       5022
Total:       13920       8844       5075

You get a similar misleading result, but you get to see the actual server condition too. As you can see, the first free value is very low and that's what is concerning you. However I want to draw your attention to the next line. That's really where you need to watch. If you look at the buffers/cache line, you can see that the used value is 5786mb and we have 2187mb in the free column. That free column is Free + Cached + buffers (plus/minus a rounding error of less than 2 Kbytes). That's really the line that you need to watch. From that line we can tell that 5.79gb of the 7.97gb of total physical memory is already used by programs. We can also see that we have 2.19gb of RAM in the cached pool that is available for use. As I mentioned before, Linux doesn't usually let memory go to waste. So you will watch that free number on the first line drop down to the double digits, but even then the cached value will be around 2gb. That means we have roughly 2gb of memory available for programs right now. If a program needs more, it will pull it out of the cached memory pool, and even after that it will use the swap space (an additional 5gb) before it is really out of memory.

To look at how much memory each program is using, I use this line:

# ps aux | head -1; ps aux | sort -nr -k 4 | head -20
USER       PID %CPU %MEM     VSZ    RSS TTY   STAT START  TIME COMMAND
102       4123  0.4 10.8 1196416 885840 ?     Ssl  Feb18 25:26 memcached [...]
mysql    17929  7.6  3.0  949372 248040 pts/3 Sl   06:05 23:32 /usr/libexec/mysqld [...]
root      7059  0.0  1.6  204360 138040 ?     Ssl  Feb18  0:28 /usr/sbin/clamd
apache   26931  1.2  0.8  225096  69956 ?     S    11:06  0:05 /usr/bin/php-cgi
apache   26631  1.0  0.8  223304  66856 ?     S    11:04  0:05 /usr/bin/php-cgi
apache   26458  1.5  0.8  223488  66824 ?     S    11:03  0:09 /usr/bin/php-cgi
apache   23879  0.5  0.8  225068  68376 ?     S    10:48  0:08 /usr/bin/php-cgi
root     26404  0.0  0.7  131956  58708 ?     S    Feb20  0:04 spamd child
root     24156  0.1  0.7  136320  63308 ?     S    07:04  0:28 spamd child
apache   26937  2.5  0.7  221812  59788 ?     S    11:06  0:11 /usr/bin/php-cgi
apache   26567  0.6  0.7  222416  61756 ?     S    11:04  0:03 /usr/bin/php-cgi
apache   26405  0.0  0.7  222748  58228 ?     S    11:03  0:00 /usr/bin/php-cgi
apache   23890  0.4  0.7  214040  57508 ?     S    10:48  0:06 /usr/bin/php-cgi
apache   23851  0.1  0.7  221972  58596 ?     S    10:48  0:01 /usr/bin/php-cgi
apache   17990  0.0  0.7  223916  58320 ?     S    06:06  0:00 /usr/bin/php-cgi
apache   17164  0.1  0.7  215152  58956 ?     S    10:13  0:04 /usr/bin/php-cgi
apache   14406  0.0  0.7  221164  63616 ?     S    Feb21  0:05 /usr/bin/php-cgi
root      7099  0.0  0.6  124312  49792 ?     Ss   Feb18  0:03 /usr/bin/spamd [...]
apache   26932  1.3  0.6  212336  52944 ?     S    11:06  0:05 /usr/bin/php-cgi
apache   26628  2.2  0.6  213964  55164 ?     S    11:04  0:11 /usr/bin/php-cgi

That shows you the 20 most memory-intensive programs. Right now, on this server, the top two are memcached and mysqld - as it should be. Then there's a huge list of php-cgi instances prelaunched to handle an influx of connections, and the spam checker coming up a few times. Most of the instances only take up ~220K, which isn't bad either. So from this, I can see that I have used a lot of memory in preparing many instances of php that are ready to go as connections come in. I also have APC installed, which allows the use of shared memory and reduces the overall footprint.

All in all, while I am showing a really low free memory value on this server, I know I actually have more memory available, and a lot of the existing memory is already taken up in preparation for a much heavier load. As I speak, the server is dealing with 630 connections quite nicely.

To visually monitor memory usage, try this:
watch -n 1 -d free -mt
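For convenience, the separate checks in these notes can be rolled into one small status script. This is just a sketch; the php-cgi process name is specific to this box:
#!/bin/sh
# quick-status.sh - one-shot summary of the checks used above
echo "php-cgi processes: $(ps -A | grep php-cgi | grep -v grep | wc -l)"
echo "cpu cores:         $(grep -c '^processor' /proc/cpuinfo)"
echo "load averages:     $(cut -d' ' -f1-3 /proc/loadavg)"
free -mt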