This time around, I’m being proactive about building caching into my code. From what I’ve read, APC has the best performance for object-level caching, but Memcache is built with a distributed architecture that lets it scale with relative ease. In any case, I’m using Memcache for object-level caching (a sketch of that pattern follows the results below), APC for opcode caching, and XDebug for performance profiling. I use http_load to run multiple fetches in parallel and measure the server’s throughput. I ran my code with 5 parallel fetches for a total of 1000 fetches on my development server. The improvement from APC (opcode caching only) was drastic:
Without APC (comment out extension=apc.so):
root@fork:/home/dev/http_load# ./http_load -parallel 5 -fetches 1000 url_file
1000 fetches, 5 max parallel, 451000 bytes, in 58.1554 seconds
451 mean bytes/connection
17.1953 fetches/sec, 7755.09 bytes/sec
msecs/connect: 0.154203 mean, 47.333 max, 0.031 min
msecs/first-response: 290.431 mean, 13220.5 max, 57.739 min
HTTP response codes:
  code 200 -- 1000
With APC (add extension=apc.so):
root@fork:/home/dev/http_load# ./http_load -parallel 5 -fetches 1000 url_file
1000 fetches, 5 max parallel, 451000 bytes, in 10.8297 seconds
451 mean bytes/connection
92.3386 fetches/sec, 41644.7 bytes/sec
msecs/connect: 0.107207 mean, 0.205 max, 0.031 min
msecs/first-response: 54.0108 mean, 6252.95 max, 10.541 min
HTTP response codes:
  code 200 -- 1000
Fetches per second went from 17.2 to 92.3, and mean time to first response dropped from 290.4 ms to 54.0 ms. Granted, these results are somewhat anecdotal because they’re based on my application and my current dev-server setup. Any way you look at it, though, the improvement is very impressive.
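For the object-level side mentioned above, the usual pattern with the Memcache extension is cache-aside: check the cache first, and on a miss build the object the expensive way and store it with a TTL. Here’s a minimal sketch, assuming a local memcached on the default port; the key name, TTL, and loadArticleFromDb() helper are placeholders for illustration, not part of my actual setup:

<?php
// Cache-aside object caching with the Memcache extension.
// Hypothetical example: the key, TTL, and loadArticleFromDb() are stand-ins.

function getArticle($id)
{
    $memcache = new Memcache();
    $memcache->connect('127.0.0.1', 11211);   // default memcached port

    $key = 'article_' . $id;

    // Try the cache first.
    $article = $memcache->get($key);
    if ($article !== false) {
        return $article;                       // cache hit
    }

    // Cache miss: build the object, then store it for next time.
    $article = loadArticleFromDb($id);         // hypothetical DB helper
    $memcache->set($key, $article, 0, 300);    // no compression, 5-minute TTL

    return $article;
}

The nice part is that because Memcache speaks to a daemon over the network, the same pattern keeps working when the cache is moved to (or spread across) separate boxes, which is exactly the scaling story that drew me to it over APC for object caching.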