As anyone would expect, the more complex a scene and the more textures it uses, the bigger its memory footprint. I do most of my modelling and rendering between a MacBook Pro and a desktop PC running either Windows XP or Linux.
The Mac has 4GB of RAM; when I bought it I opted for the upgrade because of the increasing complexity of the scenes I was playing with, and because having more RAM usually helps system stability. The PC, however, despite fairly reasonable specs at the time, was stuck at 2GB of RAM because 32-bit XP can only address about 3GB anyway. So quite often, if I was on the borderline of being able to render something on my Mac, I could get no speed increase by using the PC as a network render server.
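The 32-bit ceiling is just pointer arithmetic. A quick illustrative sketch (my own numbers, not from any particular tool):

```python
# A 32-bit pointer can reach at most 2^32 distinct byte addresses.
addressable_bytes = 2 ** 32
print(addressable_bytes // 2 ** 30)  # 4 GiB of total address space

# Memory-mapped devices (video RAM, PCI, firmware) are carved out of
# that same 4 GiB space, which is why 32-bit XP typically exposes
# only around 3 GiB of physical RAM to the operating system.
```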
Well, not anymore! I bought 8GB of RAM and maxed out the motherboard. After testing it all in Memtest86 and playing with the frequency and timings, I booted into Linux and started doing what was previously impossible.
I started out with a cube and ran the greeble tool in Blender over it several times to give me some crazy shape. While watching the scene's memory usage in Blender, I duplicated the now spiky cube a few times and filled the view with copies. In total, the memory usage in Blender alone was near 2.5GB… this would have been utterly impossible previously, and even a little dodgy on the Mac.
Then I hit render. The scene took a good while to load, but once it did it used about 4GB of RAM… again, totally impossible previously. This was the result.
It contains over 11 million verts and was rendered at 2048 × 1536. I got about 17,000 samples per second, which again is amazing given the complexity. I will definitely be doing more scenes like this in the future 😀
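For a sense of scale, here is a quick back-of-the-envelope check of those numbers (the scene stats are from the render above; the arithmetic is mine):

```python
# Frame size of the render above
width, height = 2048, 1536
pixels = width * height
print(pixels)  # 3145728 pixels in the frame

# Over 11 million vertices in the scene
verts = 11_000_000
print(verts / pixels)  # roughly 3.5 vertices per rendered pixel
```

In other words, the scene carries more geometry than the frame has pixels, which gives some idea of why it needed so much memory to render.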