I'm rewriting the Rankforest back-end and have been profiling and benchmarking a test program in both C# and Ruby.
The program fetches a web page 10 times, running a regex over the page text 100 times per fetch. It then reads about 2000 values from my local database, incrementing a counter for each row, and finally performs 2000 inserts into the database using prepared statements.
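To make the workload concrete, here's a minimal sketch of the fetch/regex/read phases on the C# side (the URL, regex, table, and credentials are all placeholders, not the real Rankforest ones):

using System;
using System.Net;
using System.Text.RegularExpressions;
using MySql.Data.MySqlClient;

class FetchAndRead
{
    static void Main()
    {
        // Fetch the page 10 times; run the regex 100 times per fetch.
        WebClient client = new WebClient();
        Regex pattern = new Regex("href=\"([^\"]+)\"");  // stand-in pattern
        for (int fetch = 0; fetch < 10; fetch++)
        {
            string page = client.DownloadString("http://example.com/"); // placeholder URL
            for (int run = 0; run < 100; run++)
                pattern.Matches(page);
        }

        // Read roughly 2000 values, incrementing a counter per row.
        using (MySqlConnection conn = new MySqlConnection(
            "Server=localhost;Database=test;Uid=user;Pwd=pass")) // placeholder credentials
        {
            conn.Open();
            MySqlCommand select = new MySqlCommand(
                "SELECT value FROM samples LIMIT 2000", conn); // hypothetical table
            int counter = 0;
            using (MySqlDataReader reader = select.ExecuteReader())
            {
                while (reader.Read())
                    counter++;
            }
            Console.WriteLine("rows read: " + counter);
        }
    }
}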
For a while I was using time (the external /usr/bin/time binary, not the bash built-in) to track CPU usage and execution time. Here's a sample of how that looked for the C# version:
1.70user 0.11system 0:15.17elapsed 12%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+3840minor)pagefaults 0swaps
And for the Ruby version:
0.30user 0.04system 0:22.03elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+1859minor)pagefaults 0swaps
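For reference, those samples came from invocations along these lines (the program names are placeholders):

/usr/bin/time mono FetchAndRead.exe
/usr/bin/time ruby fetch_and_read.rb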
The C# version uses MySQL's Connector/Net driver (version 1.0.7 ships with a Mono dll).
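For the insert phase, the prepared-statement path through Connector/Net looks roughly like this; the table and column are made up, and note that the 1.0.x driver expects '?' as the parameter marker (later releases switched to '@'):

using MySql.Data.MySqlClient;

class PreparedInserts
{
    static void Main()
    {
        using (MySqlConnection conn = new MySqlConnection(
            "Server=localhost;Database=test;Uid=user;Pwd=pass")) // placeholder credentials
        {
            conn.Open();
            MySqlCommand insert = new MySqlCommand(
                "INSERT INTO samples (value) VALUES (?value)", conn); // hypothetical table
            insert.Parameters.Add("?value", MySqlDbType.Int32);
            insert.Prepare(); // compiles the statement server-side once

            // Reuse the prepared statement for all 2000 inserts.
            for (int i = 0; i < 2000; i++)
            {
                insert.Parameters["?value"].Value = i;
                insert.ExecuteNonQuery();
            }
        }
    }
}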
The C# version was faster overall but sustained much higher CPU usage (12% versus 1% in the samples above), and glancing at the output of top as it ran showed a slightly higher memory usage as well.
I wanted to take into account the shared libraries each application was using, so I installed Exmap (you can apt-get it if you're running Debian/Ubuntu). Going by the Effective Resident and Mapped columns, the Ruby version is actually a little lighter on memory. I was a little surprised by this and wonder if there's something I'm missing.
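If you want to try the same comparison, the steps are roughly as follows (assuming the package name matches the project, and that gexmap is the bundled viewer):

apt-get install exmap
gexmap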
The relative sluggishness of Ruby is acceptable in this case, as the HTTP requests the application makes have to be throttled anyway; I can simply shorten the sleep time to absorb the extra processing time. When I do that, the two versions finish in similar time (the C# one is still faster at the database inserts).
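One way to do that is to sleep only for whatever is left of a fixed per-request budget, so slower processing eats into the sleep rather than stretching the run. A minimal sketch (the 1-second budget is made up):

using System;
using System.Threading;

class Throttle
{
    const int IntervalMs = 1000; // hypothetical per-request budget

    // Run one unit of work, then sleep for whatever remains of the
    // interval. Slower processing shortens the sleep automatically.
    static void ThrottledStep(Action work)
    {
        DateTime start = DateTime.UtcNow;
        work();
        int elapsed = (int)(DateTime.UtcNow - start).TotalMilliseconds;
        if (elapsed < IntervalMs)
            Thread.Sleep(IntervalMs - elapsed);
    }

    static void Main()
    {
        for (int i = 0; i < 10; i++)
            ThrottledStep(delegate { /* fetch + regex work goes here */ });
    }
}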