LuxRender on the Raspberry Pi

[UPDATE] 15 April 2013 – Due to some drastic code changes in lux and the recent work by the developers on making SLG an active render engine, it is likely that these instructions will not work for the latest versions of the code.
Version 1.1 of both LuxRays and LuxRender will compile, however. To get these, once you have cloned the repositories, use the following commands:
>hg update v11 (in the lux repository)
>hg update luxrender_v1.1 (in the luxrays repository)
Many thanks to Carlos for running through these instructions and informing me of the code break.



So, from start to finish, this is how to get both LuxRays and LuxRender working on the Raspberry Pi. You should note, however, that performance is very slow; for simple scenes it is still able to produce nice results, and if a bunch of them were networked together into a cluster… dare I say ‘bush’?… a classroom could give the same output as a slow modern PC. (The ARM is not optimized for multiple simultaneous operations, so it suffers a lot compared to a modern x86.)


1) Open a terminal and install *some* of the dependencies of luxrender. Some of the packaged versions are too old, so we will have to build those ourselves.

>sudo apt-get install mercurial build-essential bison flex libopenexr-dev libtiff4-dev libpng12-dev freeglut3-dev qt4-dev-tools libxmu-dev libxi-dev libfreeimage-dev libbz2-dev

2) Make a dev directory to keep your home area tidy, and cd into it

>mkdir dev
>cd dev

3) Download cmake, uncompress it, and compile it, as the version which downloads through apt-get is too old.

>wget .
>tar zxvf cmake-2.8.7.tar.gz
>cd cmake-2.8.7
>./bootstrap
>make
>sudo make install

4) Download and build Python 3.2 – once again, because the version that comes with the distribution is too old. (This isn't strictly required, as it is only needed for pylux, but I'm including it because it works.)

>tar jxvf Python-3.2.2.tar.bz2
>mkdir python32_build
>cd Python-3.2.2
>./configure --enable-shared --prefix=/home/pi/dev/python32_build
>make install

There will be a few things that look like errors, but you can ignore them

5) Get Boost v1.47, make a cup of tea and drink it. (If you downloaded the .tar.bz2 archive instead, swap zxvf for jxvf below.)
>tar zxvf boost_1_47_0.tar.gz
>cd boost_1_47_0

Next, edit the file project-config.jam with whatever text editor you like… I used emacs.
>emacs project-config.jam
Go to the line that says
using python : 2.6 : /usr ;

and add the following under it:
using python : 3.2 ;

Be sure to use a ; (preceded by a space) at the end of the statement or it will not work. Now build boost as follows.
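After the edit, the relevant section of project-config.jam should read something like the fragment below. (A sketch of how mine looked – your existing 2.6 line may differ slightly; the important part is the space before each trailing semicolon.)

```
# project-config.jam (fragment)
using python : 2.6 : /usr ;
using python : 3.2 ;
```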

>./bjam python=3.2 stage

Fingers crossed, all targets should build.

6) Get, build, and install libglew

>wget
>tar zxvf glew-1.7.0.tgz
>cd glew-1.7.0
>sudo make install


7) Clone the luxrays repository and prepare the build with cmake

>hg clone
>cd luxrays

8) Down to the nitty gritty
Lux and LuxRays won't compile out of the box because they use the SSE extensions of the x86 architecture, which ARM does not have. So we need to make some fairly invasive modifications to the code, and unfortunately a list of changes is the best I can offer at the moment. SSE is used in the QBVH accelerator, so those parts of the code need to be commented out and the headers removed. If you try to make, you will get as far as qbvhaccel and it will fail with a bunch of errors related to a missing header, "xmmintrin.h".

Essentially, all the instructions below remove any and all references to qbvh and mqbvh from the code: first by removing the SSE compiler flags, second by making the builder ignore the corresponding headers and cpp files, and finally by cleaning up any references left in the code which stop it building. Not very clean, but there is no way around it (that I know of). Most of the lines I remove or comment are found by searching for qbvh, so if you comment rather than delete, your line numbers might not match those below, but they will give you a good idea of the location.
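Since most of the edits below boil down to hunting for qbvh references, a quick grep over the tree will list everything that still needs attention. (A sketch, run from the top of the luxrays checkout; the file-type filters are my guess at where references live.)

```shell
# List every source/build file that still mentions the QBVH accelerators.
# -i also catches QBVH/MQBVH; since "mqbvh" contains "qbvh", one pattern covers both.
grep -rli 'qbvh' --include='*.cpp' --include='*.h' --include='*.cmake' --include='*.txt' .
```

When this prints nothing, you have caught them all.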

I) emacs cmake/PlatformSpecific.cmake
Around line 107, remove all references to -msse, -msse2, etc.
The purpose of this is so the generated makefiles won't include those compiler flags. If the build still complains about SSE extensions, comb through the file and remove any remaining -msse / -msse2 references.

II) emacs src/CMakeLists.txt
Line 30: remove mqbvh and qbvh from the SET(… list.
Then search the rest of the file and remove all lines that contain qbvh (this covers the mqbvh lines too).

III) emacs src/core/dataset.cpp
Comment out lines 31 and 32 (the accelerator headers).
Line 51: change accelType = ACCEL_QBVH to accelType = ACCEL_BVH.

Search the rest of the file and remove all further lines/sections containing qbvh or mqbvh.
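If you would rather script the accelerator switch than edit it by hand, a one-line sed does the line-51 change. (A sketch – the exact spacing in your checkout may differ, so double-check the file afterwards.)

```shell
# Flip the default accelerator from the SSE-based QBVH to the plain BVH, in place.
sed -i 's/accelType = ACCEL_QBVH/accelType = ACCEL_BVH/' src/core/dataset.cpp
```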

IV) Make
>cmake . -DBoost_INCLUDE_DIR=/home/pi/dev/boost_1_47_0 -DLUXRAYS_DISABLE_OPENCL=1
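The cmake call above only generates the build files; the compile itself is a plain make. On a Pi it is worth forcing a single job so the compiler doesn't run the machine out of memory (the -j1 suggestion is mine, not from the lux build docs):

```shell
# Compile single-threaded; parallel jobs can exhaust the Pi's RAM.
make -j1
```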

After waiting for a while (maybe an hour or so), LuxRays will compile. I have tested this on my images and I can say that it works. If you are in the luxrays head directory, you can test it by simply running


A window will open and slg will render a standard luxball scene. Performance is slow; if you intend to use it to render anything, making sure it only uses a single thread would be an advantage – it stops the ARM wasting cycles.


So at this point we don't have to go off and grab any more dependencies – just clone luxrender from the repo, make the modifications, and build.

9) Clone the repo
>hg clone
>cd lux

Make the following modifications

>emacs CMakeLists.txt
Search for and remove the SSE flags, specifically around line 309.

>emacs cmake/liblux.cmake
Search for and remove references to the qbvh accelerator – just search for it and comment or remove the lines (I'm assuming you remove the lines, so any line numbers given here might not match your file).
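If you prefer commenting to deleting (which keeps later line numbers roughly intact), GNU sed can prefix every matching line with a cmake comment marker in one go. (A sketch, assuming GNU sed and its case-insensitive /I address flag.)

```shell
# Comment out every line mentioning qbvh (covers mqbvh too, case-insensitively).
sed -i '/qbvh/I s/^/# /' cmake/liblux.cmake
```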

>cmake . -DLUXRAYS_DISABLE_OPENCL=1 -DBOOST_INCLUDEDIR=/home/pi/dev/boost_1_47_0 -DPYTHON_LIBRARY=/home/pi/dev/python32_build/lib/ -DPYTHON_INCLUDE_DIR=/home/pi/dev/python32_build/include/python3.2m

If everything went well and you followed the steps reasonably closely, you should now have a working build of luxrender!

A few notes: if you try to run luxrender, you will get a complaint about boost. To get rid of this, add the boost libraries to your LD_LIBRARY_PATH environment variable. Alternatively, copy the libs to /usr/local/lib, which should allow them to be picked up without any fuss.
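Concretely, pointing the loader at the staged boost libraries looks like this (the stage path follows from the boost build in step 5; adjust it if you built elsewhere):

```shell
# Let the dynamic loader find the boost .so files built earlier.
export LD_LIBRARY_PATH=/home/pi/dev/boost_1_47_0/stage/lib:$LD_LIBRARY_PATH
# Add the line to ~/.bashrc to make it permanent.
```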

I will update with some pictures of luxrays and luxrender working, and scenes rendered on the Pi, when I get a chance.

Here is a scene with volumetric scattering at 25 S/p, rendered in about 12 hours

This entry was posted in LuxRays and LuxRender.

40 Responses to LuxRender on the Raspberry Pi

  1. 3d max kursu says:

thanks for this helpful tutorial for beginners like me :=)

  2. Carlos says:

What size of SD card did you use – 4, 8, 16?

    • Mark says:

      I used an 8gb SD card, though if you build from scratch it might be necessary to clean up the sources after each build otherwise space can run low. (the main one is boost)

      • Carlos says:

        I’m also using 8 gb, but after step 1 there is no free space, not even to create the ‘dev’ folder

      • Mark says:

The disk image probably only fills about 3 GB of space, and the file system is only about 3 GB in size, meaning the free space is almost zero. You need to expand the filesystem to fill the free space on the SD card. This can be done in the first-boot dialogue that appears in the current firmware. There are some instructions here

        Following this to the letter will allow you to resize the partition and use all the free space on the card

      • Carlos says:

I found out about expanding the filesystem after I posted here 🙂
But now I'm having problems in step 4, tar zxvf Phyton-3.2.2.tar.bz2 – I tried it that way and many others like xjvf or something like that. Maybe the file is corrupted?

      • Mark says:

Possibly corrupted; if in doubt, fire up X and download from python3.2 – it should work…

That said, you don't actually need Python; it is there for completeness but is only required for the blender exporter to work… so I'd not worry too much about it

  3. sprocket says:

    When trying to run luxrender I get the error: cannot execute binary file. Any ideas? I tried copying the boost libs from /home/pi/dev/boost_1_47_0/stage/lib to /usr/local/lib/

    • Mark says:

The first thing to try is just running luxconsole – it is the easiest thing to fix, as boost is normally the only thing that gives it trouble.
The other thing I have found (and I am not sure if it is just me not having my system set up correctly) is that once you have the boost libs in /usr/local/lib/ you need to set up the environment to look in /usr/local/lib

Try export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH in your .bashrc or whatever script is executed when you start a terminal… hopefully this fixes it.

      • sprocket says:

I ran: ldd console before, and it was not finding the boost library. Now it is, but it's still not running. I'm wondering if I've missed something when editing CMakeLists.txt and liblux.cmake

  4. sprocket says:

    looks like i was doing it wrong :/
    I was trying to run it with: . luxconsole
    should have been: ./luxconsole

  5. sprocket says:

So far I have tried luxrender on one scene: sppm, 25 passes in the same time my i3 laptop did 1020. Not sure if I have it overclocked at the moment. I tried installing cmake through apt-get; it worked for compiling luxrays, but then I ran into problems with luxrender later on because I mixed up the address of the python32_build dir (but only found my mistake after I had reverted back to 2.8.7).

Can't thank you enough for putting this together!

    • Mark says:

Wonderful. Yes, the problem with python is that it is needed for the exporter; it isn't actually used anywhere else, I don't think. pyLux is what is required for blender so you can get previews and use the exporter. Otherwise it isn't actually required.

      The speed is an interesting comparison of how irrelevant clock speeds are when thinking about different architectures and what they are designed to do. Lux really is designed for multithreaded CPUs and the ARM on the raspberry pi is nothing at all like an x86 processor. The ARM doesn’t handle multiple tasks very elegantly, at least not as elegantly as your i3. Thus the speed comparison makes the ARM look extremely slow.

My interest in compiling lux was mainly as an experiment and a way for me to learn. One thing this would be great at demonstrating is distributed computing in a classroom with many Raspberry Pis. Still, I am highly impressed that the software works and actually gives identical results across two different architectures… while this should be the case, I found it interesting.

  6. sprocket says:

Yes, I've noticed hang-ups on the Pi when running multiple programs in X. Distributed rendering is what I'm hoping to achieve; I think the memory is the biggest limitation though – will be testing that out next. Also, I'm wondering what's the best way to run a farm of them. I've currently got it running console-only, so there's no overhead of running X. I'm assuming it's possible to telnet into it. The other idea I had is to see if I can run VNC on it

    • Mark says:

I've only really done distributed rendering over… all of my PCs and laptops… so that's 2 laptops, 2 computers and 3 Raspberry Pis.

I used one of the Pis as the master, then ran SSH to each of the other systems to start the consoles, and then used lux's networking capability. I have no idea what the most efficient method of working with it is, though.

      • sprocket says:

Yeah, that's what I was thinking of doing: use the laptop as the master and have the Pis as slaves. Just tried SSH, and it kills the luxrender process when you close the SSH connection. VNC does work, but it's a virtual display, not the same as the physical display…

  7. kannaiah says:


Just wrote a few wrappers to enable the build with xmmintrin.h intrinsics, which try to use ARM NEON instructions.
The header is at the following link

Could you check if there is any improvement with this?


  8. kannaiah says:



#ifdef __arm__
#include "luxrays/accelerators/sse2neon.h"
#else
#include <xmmintrin.h>
#endif

  9. Mark says:

    Sorry for the very slow reply, I actually have had very little time to do any testing of the new LuxBuilds, even on an x86_64, so unfortunately it will be some time before I can get back on the Raspberry Pi…

    I will give it a go though next time! and report back many many thanks

  10. Robert says:

    I tried your steps so many times but I can’t make it work

    • Mark says:

There have been some really big changes in luxrender, so maybe the latest versions of luxrender and luxrays are not the way to go… you could try the 1.1 or 1.2 tags for luxrender, and then I am not sure exactly which luxrays version…

The changes to lux reorganize the code to add SLG to lux for some super-fast rendering; for us on the ARM it makes things somewhat tricky.

hg update v11 for lux, and
hg update luxrender_v1.1 for luxrays…
I think those would have more success

      • Robert says:

Should I try that when cloning the repos?

      • Mark says:

Yeah, so I would clone the repo and then grab the older version/branch… I think those commands *should* do that.

      • Robert says:

I found an old backup image which has luxrays without the SLG update. Everything was working fine until 44% through the make step, when I got this error:

[44%] Built target luxrays
Linking CXX executable ../../bin/benchsimple
../../lib/libluxrays.a(dataset.cpp.o): In function 'luxrays::DataSet::Preprocess()':
dataset.cpp:(.text+0x6c0): undefined reference to 'luxrays::BVHAccel::BVHAccel(luxrays::Context const*, unsigned int, int, int, int, float)'
collect2: ld returned 1 exit status
make[2]: *** [bin/benchsimple] Error 1
make[1]: *** [samples/benchsimple/CMakeFiles/benchsimple.dir/all] Error 2
make: *** [all] Error 2

  11. Carlos says:

I'm getting close to making this work. While running "cmake . -DLUXRAYS_DISABLE_OPENCL=1 -DBOOST_INCLUDEDIR=/home/pi/dev/boost_1_47_0" (I didn't pass "-DPYTHON_LIBRARY=/home/pi/dev/python32_build/lib/ -DPYTHON_INCLUDE_DIR=/home/pi/dev/python32_build/include/python3.2m" because I had problems with python) I got the following error:

Found Qt4: /usrbin/qmake/ (found suitable version "4.8.2", requires is "4.6.0").
How can I install that version?

    • Mark says:

So 4.8 should work; it is probable that the error is somewhere else, or that the error call from cmake is actually incorrect. Try running the command again. If it gives you an error, delete the CMakeCache file and try it again.

      • Carlos says:

Lux has built – it took around 7–9 hours. Now it's time to do some rendering. Thank you so much.

      • Mark says:

That is great! I must warn you it is very slow, but it's good to hear it worked… out of interest, which versions of lux and luxrays did you use?

    • Carlos says:

For Lux I downloaded v11, and v1.1 for Luxrays. Actually, while I was following your guide I captured some screenshots, and I also found a few mistakes – for example, you said to use "tar zxvf boost_1_47_0.tar.gz" but it doesn't work, so I tried many others until one worked. If you want, I can update your tutorial the way it worked for me and add some screenshots.

      • Mark says:

Oh yes, I have been meaning to fix those. It is because of the way I copied and pasted into wordpress – it sometimes changes the odd thing here and there, especially the cmake commands.

        Many thanks for the update!

      • Carlos says:

        I’ll let you not whet it’s ready.

      • Carlos says:

        I ment “I’ll let you know when it’s ready” haha

  12. Carlos says:

    Hey Mark sorry to bother you again, do you know where I can find some .lxs files to run some tests?

    • Mark says:

      Hi Carlos

There should be a few dotted around various parts of the LuxRender forum; however if, like me, you have a version 1 R_Pi with only 256 MB RAM, rendering complex scenes can be difficult. Here is an example scene that should fit into a 240/16 memory split (I think that's the max; either that or it's 224/32).

  13. Carlos says:

Thank you! I downloaded Luxrender for Windows, installed it, and now I'm rendering one of the two examples that came with the installation. I'll try your file later when the other file finishes rendering – hope it doesn't take too long hahaha. And I have the 512 MB RAM Pi, let's see how this improvement works!

    • Mark says:

Great – the Raspberry Pi will be a hell of a lot slower than the PC, but it is quite neat that the code does compile and give exactly the same results on two completely different bits of hardware. It is also, I think, a great example of how to demonstrate distributed computing, as you can have a PC/Pi running as master and many running as network slaves… I have 3 Pis and left a scene running for a week while I was away; it looked pretty good by the time it had finished.

Also, Lux being physically based means it is quite slow, but it also means it can be used to simulate quite accurate optics tests 🙂

  14. Carlos says:

Actually that is my next step, distributed rendering – as a final college project I'm doing research into cheap/affordable distributed computing.

  15. Carlos says:

    So this is the output after 24 hours with the new 512 mb Model B
