Last week the Ruby world was turned upside-down because of a security warning that Apple released about some Ruby security issue. It turns out that this is all wrong and not as bad as it seems. Sorry, but the Ruby guys at Apple are _total_ morons! And the Japanese, polite as they are, are just too kind! Thank you, Matz! Apple deserves a slap across the face for this one!
This is such a classic!
And from the Gentoo Bug List:
In a long-running ruby process with a highly dynamic object-space, we encountered performance degradation and finally memory-allocation failure due to heap fragmentation. The problem can be mitigated by linking ruby against ptmalloc3.
Hi all! I’m writing this mail in the hope that my experiences may point you in the right direction, if you ever encounter a similar problem. Naturally I would be delighted to read your comments and advice on my conclusions and the steps taken.
http://ch.oddb.org provides information on the Swiss health-care market. Behind an Apache/mod-ruby setup lies a single ruby process, which acts as a DRb server. Predating Ruby on Rails, the application is based on self-baked libraries [2-4].
A couple of weeks ago we experienced a spike in user requests. Although the application seemed to scale well most of the time, we began experiencing outages after a couple of hours. Whenever that happened, CPU load rose to 100% and DRb requests were hanging, sometimes for several minutes. At the same time, memory usage started rising considerably. If left to run for enough time, the application would crash with a NoMemoryError: ‘failed to allocate memory’ – even though there was still plenty of memory available in the system.
Thanks to Jamis Buck [5] and Mauricio Fernandez [6], I was able to determine that the application was stuck for several seconds in glibc’s realloc, which may be called (via ruby_xrealloc) from basically anywhere within ruby where a new or enlarged chunk of memory might be required.
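To see why realloc sits on such a hot path, it helps to remember that even something as mundane as growing a String in place repeatedly enlarges its backing buffer through ruby_xrealloc and, ultimately, the C library's realloc. This is a trivial, hypothetical illustration (the sizes are arbitrary and only meant to exercise that path, not to reproduce the fragmentation described here):

```ruby
require 'benchmark'

# Growing a String in place repeatedly enlarges its backing buffer, which
# goes through ruby_xrealloc and ultimately the C library's realloc.
buf = ""
elapsed = Benchmark.realtime do
  100_000.times { buf << ("x" * 10) }  # each append may trigger a realloc
end
puts "grew to #{buf.bytesize} bytes in #{elapsed.round(3)}s"
```

On a fragmented heap, each of those realloc calls has to hunt for (or create) a suitably sized chunk, which is where the multi-second stalls came from.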
Having stated the diagnosis, heap fragmentation [7], there were a couple of things I could try to improve the performance of our application, all revolving around the principle of creating fewer objects, and in particular fewer Strings, Arrays and Hashes. By eliminating a number of obvious suspects (mainly to do with the on-demand sorting of values stored in a large Hash), I was able to raise the life-expectancy of our application considerably – close, but no cigar.
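One of the obvious suspects mentioned above, the on-demand sorting of values stored in a large Hash, allocates a fresh Array on every request. A minimal sketch of the fix, memoizing the sorted Array and invalidating it only on mutation (the class name and structure here are hypothetical, not oddb.org's actual code):

```ruby
# Memoize the sorted values of a Hash so that repeated reads do not
# allocate a new Array each time; only a write invalidates the cache.
class SortedCache
  def initialize
    @data = {}
    @sorted = nil  # cached sorted values, rebuilt lazily
  end

  def []=(key, value)
    @data[key] = value
    @sorted = nil  # invalidate the cache on mutation
  end

  def sorted_values
    @sorted ||= @data.values.sort  # allocate the Array once per change
  end
end
```

Repeated calls to `sorted_values` between writes return the very same Array object, so a read-heavy workload creates far fewer short-lived Arrays for the GC and allocator to churn through.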
And then – all praise bugzilla – I found a bug report [8] describing almost exactly my problems and leading me to ptmalloc3 [9]. Glibc’s malloc implementation is based on ptmalloc2, and may be replaced by simply linking ruby against ptmalloc3.
As far as I understand, ptmalloc3 does not eliminate heap fragmentation. However, thanks to the bitwise trie employed in the newer version, it finds free chunks of the right size several orders of magnitude faster. Additionally, it seems that glibc 2.5 abandons its attempts to find a best-fit chunk after a while (possibly after 10000 tries), instead expanding the heap as long as possible and finally failing to allocate memory – causing first the fast rise in memory usage and later the observed NoMemoryError.
At this time, http://ch.oddb.org has run – powered by ruby and ptmalloc3 – for a little more than 24 hours without displaying any of the signs I have come to associate with heap fragmentation. Significantly less time is spent in allocating memory – and consequently in GC, and the overall memory-footprint has decreased by about 30%.
I hope this is of use – thanks in advance for any thoughts you want to share.
[1] Open Drug Database
[2] Object-Database Access and Object Cache
[3] State-Based Session Management
[4] Component-Based HTML Generator
[5] Inspecting a live ruby process, Jamis Buck
[6] Ruby live process introspection, Mauricio Fernandez
[7] Heap fragmentation, Bruno R. Preiss
[8] Glibc bugzilla report 4349, Mingzhou Sun, Tomash Brechko
[9] Ptmalloc home, Wolfram Gloger
Ok, this seems to kick some serious ass as far as our heap fragmentation at ODDB.org is concerned. Our CPU is not constantly at 99% anymore.
Ok, since we installed Ruby 1.8.6 the GC (Garbage Collection) does not take 50 secs (or more) to do its job when our application is around 2 GB in size. The time is down to about 20 secs – and – the speed at which the queries are delivered is up up up! Thank you for fixing this, dear Ruby community.
Update: I must actually elaborate a bit. The GC used to force us to do a restart because it took such a long time to do its job. With Ruby 1.8.6 the memory usage still “grows” throughout the day. This just has less impact on our service, as we have 12 GB of memory on our server. We still want to find out what increases the memory consumption of our software, though. We owe that answer to Ryan Davis.
I found some more interesting posts:
- Memory leaks in my site
- Rails memory usage case study
- Finding open file descriptors: ‘lsof’ is a neat utility for listing open ports, sockets and files on a per-process basis.
- WeakRef and WeakHash.
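The WeakRef idea from the last link is worth a sketch: a cache that holds objects through WeakRef does not prevent the GC from reclaiming them. Since WeakHash is not part of Ruby's stdlib, this hand-rolled version is an assumption on my part, not the code the post links to:

```ruby
require 'weakref'

# A tiny weak-valued cache: entries do not keep their objects alive,
# so the GC may reclaim them once no strong reference remains.
class WeakCache
  def initialize
    @store = {}
  end

  def [](key)
    ref = @store[key]
    ref && ref.weakref_alive? ? ref.__getobj__ : nil  # nil once collected
  end

  def []=(key, value)
    @store[key] = WeakRef.new(value)
  end
end
```

A lookup returns nil both for missing keys and for entries whose target has already been collected, so callers must be prepared to rebuild the value in either case.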
Still, I put it as follows: I believe Ruby has a serious memory-leak problem that is not being taken seriously. And it is not even only about the memory leak: the GC takes too long once memory reaches a certain size. Once our application uses more than 2 GB of memory, the GC takes more than 50 secs to do its job. Then we have to restart our application, because it should not be unresponsive for the user for 50 secs or more. PS: We got 12 GB of memory for Ruby on our server.
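For anyone wanting to reproduce this kind of measurement, here is a rough, hypothetical harness: allocate a pile of live objects, then time a full GC run. The numbers on a small heap will be nowhere near the 50-second pauses described above, but the same technique scales to a real process (note: GC.count and Benchmark are available in modern rubies; exact GC behavior on 1.8.x differs):

```ruby
require 'benchmark'

# Allocate live objects so the GC actually has something to mark,
# then time one explicit full collection.
live = Array.new(200_000) { |i| "object-#{i}" }  # keep references alive
pause = Benchmark.realtime { GC.start }
puts "full GC took #{(pause * 1000).round(1)} ms (GC runs so far: #{GC.count})"
```

Plotting that pause time against process size over a day of production traffic is the quickest way to show whether GC pauses grow with the heap the way we observed.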