My Musical Raspberry Pi setup has a lot of moving parts, which for me is fine. I recently built my Dad a version of a Musical Raspberry Pi as he was interested in my description of the one I built. The main complication in my setup is not keeping the music on the Raspberry Pi itself, but pulling it from my NAS via NFS.

Everything together

Luckily SDHC cards come in amazingly large sizes these days and seem to mostly work with the Raspberry Pi. I settled on a 32GB card, so the plan was to set aside almost all of that space for music and simply set up all the apps (mpd, mediatomb, samba) to look at this folder.

I ended up setting aside 2.5GB for the main / partition and assigning the remaining space (28GB or thereabouts) for media storage.

The final solution supports:

  • HiFi quality output via a USB DAC
  • music folder shared via samba with no authentication to keep things simple
  • mpd with inotify watching of the music folder
  • mediatomb for UPnP serving of the music folder
  • ShairPort for AirTunes support
  • gmediarender-ressurect for UPnP render support
  • Internet radio by exposing the playlists folder of mpd and simply putting in a few appropriate m3u files.


With DHCP, Samba, UPnP and all the other broadcast based networking, as long as you can get a device onto your network it simply shows up where it should. One irritant here is that Android still doesn’t support ZeroConf, so setting up MPDroid on Android still requires knowing IP addresses. In fact it’s worse than that – to make it useful you have to set up a static MAC/IP address mapping on your router. Hardly user friendly, but thankfully a one off.

There does seem to be a decent library for building ZeroConf support into apps – JmDNS – and MPDroid is open source, so a potential project in the making.

If the Raspberry Pi is going to be connected via Ethernet things are pretty simple and once all the software is setup everything works well.

Easy WiFi

If you want to use WiFi things become harder. Running the Raspberry Pi headless means editing files via SSH to get the WiFi set up. The goal here is to make this easier and accessible to less technically minded users.

One option would of course be to use the full graphical interface and rely on standard GUI tools, but this means getting the Raspberry Pi set up in a completely different way purely to enter an SSID and passphrase for the WiFi network.

Windows Problems

The initial plan was to setup an extra FAT32 partition for holding config data, such as WiFi settings (as per the Raspberry Pi config.txt in /boot) and simply allow these to be edited from a Windows machine with a card reader.

Unfortunately Windows only ever gives you access to the first partition on an SD card or USB device, which for the Raspberry Pi is /boot. There appear to be workarounds for this, but they are definitely well outside the realms of keeping things simple.

A Simple Solution

So a new route forward is to work with the single FAT32 partition we already have and pollute /boot with an additional file. There is a danger here with getting people to edit files in /boot – the potential for pain is large, but for now it gives us a nice way to setup WiFi on the Raspberry Pi for non-technical users.

The solution involves a few bits:

  • /boot/wifi.txt
  • /root/interfaces.tmpl
  • /root/
  • rc.local entry to invoke /root/ on boot

The script checks the timestamp of wifi.txt and then uses sed to replace the relevant portions of interfaces.tmpl to produce a new /etc/network/interfaces file.

An example /boot/wifi.txt:

wpa-ssid "NetworkSSID"
wpa-psk "PSK"


The /root/interfaces.tmpl template, including the placeholders the script substitutes:

auto lo

iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp

allow-hotplug wlan0
auto wlan0
iface wlan0 inet dhcp
    {{wpa-ssid}}
    {{wpa-psk}}
#wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf

iface default inet dhcp



#!/bin/sh
# Regenerate /etc/network/interfaces when /boot/wifi.txt is newer
if [ -e /boot/wifi.txt ]; then
    if [ /boot/wifi.txt -nt /etc/network/interfaces ]; then
        ssid="$(dos2unix < /boot/wifi.txt | grep ssid)"
        psk="$(dos2unix < /boot/wifi.txt | grep psk)"

        cp /root/interfaces.tmpl /etc/network/interfaces
        sed -i -e "s/{{wpa-ssid}}/$ssid/" /etc/network/interfaces
        sed -i -e "s/{{wpa-psk}}/$psk/" /etc/network/interfaces
    fi
fi
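The substitution step can be tried standalone. A minimal sketch, using throwaway temp files in place of the real template and interfaces file so nothing under /etc is touched:

```shell
# Demo of the templating step. The placeholder and wifi.txt values
# mirror the examples above; tmpl stands in for interfaces.tmpl.
tmpl=$(mktemp); out=$(mktemp)
printf '{{wpa-ssid}}\n{{wpa-psk}}\n' > "$tmpl"

ssid='wpa-ssid "NetworkSSID"'
psk='wpa-psk "PSK"'

cp "$tmpl" "$out"
sed -i -e "s/{{wpa-ssid}}/$ssid/" "$out"
sed -i -e "s/{{wpa-psk}}/$psk/" "$out"

cat "$out"    # prints the two substituted wpa- lines
rm -f "$tmpl" "$out"
```

Note this breaks as soon as an SSID or passphrase contains a `/` or other sed metacharacters, which is part of the fragility admitted below.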


I’ll happily put my hands up at this point and admit this is a fragile solution that has the potential to go wrong easily. Changing WiFi settings is not something most people will do very often, so it’s good enough for now. That said I’m keen to keep exploring the issues raised by this little project to see what more complete solutions may be possible.

The Musical Raspberry Pi has been working a treat! I was planning on upgrading to a more HiFi case, but to be honest I like the look of the exposed PCB in the clear case the Pi came with!

This is a quick post following up on a few things learnt since getting my musical Raspberry Pi up and running. What was working:

What wasn’t working:

  • Audio redirection from Windows
  • Proper init scripts for ShairPort and gmediarender-ressurect

Since then I’ve found Airfoil for audio redirection under Windows.

Day to day

It’s been nice and smooth running since getting up and running – the WAF has been high so far! We’ve been using the speakers almost daily, partly novelty I’m sure, but partly because they’re there and it’s easy and hassle free to use them.

Works well most of the time, but a little clunky at times. I think this may be a good reason for me to get my hands dirty with some Android coding.
Works very nicely and just feels a lot more polished in comparison (as ever :().

init script problems

I wasn’t happy with the rc.local approach for starting up ShairPort and gmediarender-ressurect. There were a few different avenues I had to explore before I managed to get things sorted out.

Soundcard order

A lot of the other posts on the web talk about changing the alsa module index so that the usb audio is used as the preferred device.

This is done by changing /etc/modprobe.d/alsa.conf to look like:

#options snd-usb-audio index=-2
options snd_bcm2835 index=-2

For me this didn’t change anything, the rc.local approach was still the only one that worked.


In an attempt to track down the problem it turned out that running either aplay or paplay as root failed with various permission denied problems.

The quick fix here is to add the root user to the pulse-access group:

root@raspberrypi:~# groups root
root : root indiecity
root@raspberrypi:~# usermod -G root,indiecity,pulse-access root

With this problem resolved the init script problems all went away.

This also explains why other people weren’t having issues with the ShairPort init script – it was the use of PulseAudio that was the root (ahem!) cause.

Working init scripts

The supplied init script for ShairPort now works. To install it:

pi@raspberrypi ~ $ sudo cp src/shairport/shairport.init.sample /etc/init.d/shairport
pi@raspberrypi ~ $ sudo insserv shairport

I tweaked the DAEMON_ARGS line to set the instance name and to use PulseAudio:

DAEMON_ARGS="-w $PIDFILE -a air-pi --ao_driver=pulse"

I’ve also knocked together an init script for gmediarender-ressurect based on the ShairPort one that works well:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          gmediarender
# Required-Start:    $network
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: gmediarender - UPNP renderer
# Description:       UPNP renderer!
### END INIT INFO
# This starts and stops gmediarender-ressurect

# Source function library.
. /lib/lsb/init-functions

# Adjust these paths to suit your install
DAEMON=/usr/local/bin/gmediarender

DAEMON_ARGS="-P $PIDFILE -d -f upnp-pi --gstout-audiosink=pulsesink"

[ -x "$DAEMON" ] || exit 0

start() {
    echo -n "Starting gmediarender: "
    start-stop-daemon --start --quiet --pidfile "$PIDFILE" \
                      --exec "$DAEMON" --oknodo -- $DAEMON_ARGS
    log_end_msg $?
}

stop() {
    echo -n "Shutting down gmediarender: "
    start-stop-daemon --stop --quiet --pidfile "$PIDFILE" \
                --retry 1 --oknodo
    log_end_msg $?
}

restart() {
    stop
    sleep 1
    start
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    status)
        status_of_proc "$DAEMON" gmediarender
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart}"
        exit 1
        ;;
esac

exit 0

Installing it is done in exactly the same way as for the ShairPort one. If you are not using PulseAudio take out the --gstout-audiosink=pulsesink option from the DAEMON_ARGS line.

When I built my musical Pi one of the loose ends was being able to stream audio from Windows to the speakers. The quick answer to this is Airfoil – it worked first time and appears to stream reliably over WiFi to ShairPort running on the Pi.

In fact the streaming seems more reliable than PulseAudio does out of the box. When streaming via PulseAudio I get occasional drop-outs in the audio, which I’ve yet to experience with the Airfoil/shairport combination. I’ve not yet spent any time playing with the settings for PulseAudio, so there are certainly some parameters to tweak that may help. RTFM!


There are several bits and pieces of open source code out there:

Most of them either crashed repeatedly or didn’t do anything useful first time around.


I’ve not been a particularly active Windows user at home for quite a few years now, but there is a sound card feature called Stereo Mix. This is a capture device that gives you direct access to the audio currently being played.

The problem with this is that you can still hear the audio on the laptop and the local volume affects the level of the signal that is sent.

I’d like to revisit this at some point, maybe with a view to using PulseAudio rather than AirTunes streaming. My initial thoughts on how to proceed were to write a sound card driver to capture the audio, but Airfoil appears to do something different. Airfoil can be configured to capture audio on an app-by-app basis, which is very useful – music goes to the hifi and notification sounds don’t.

A rainy day project!

I finally got my hands on a Raspberry Pi and my first project has been to make a good quality music player. I’ve not had a speaker setup to listen to music on for many years – it’s almost exclusively been on headphones, mostly on my phone, for far too many years now.

There are a lot of people out there doing similar things, and thanks to all for providing various hints and tips to get me going:

Most of these setups are focussing on a single audio source. I was after a more complex setup.

My goal for this project was to be able to chuck audio at the Raspberry Pi from:

  • Android phones
  • iOS devices
  • MPD for standalone playback

Using PulseAudio has allowed me to achieve all of these goals as well as providing a few additional benefits.


The on-board sound out of the Raspberry Pi is not intended to be high quality and for me wasn’t even close to reasonable for a HiFi experience. The Raspberry Pi has two USB ports and Raspbian has USB audio support available in the kernel out of the box.

There are a lot of USB DACs available to suit all budgets and quality requirements. I ended up with a HiFimeDIY Sabre.

I wasn’t in the market for something audiophile, but I can certainly hear the difference in audio quality compared to the audio jack on my laptop.

Testing the DAC

The first step after plugging the DAC in is to try and play something. You can list the available audio devices:

pi@raspberrypi ~ $ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7
card 1: DAC [HiFimeDIY DAC], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: DAC [HiFimeDIY DAC], device 1: USB Audio [USB Audio #1]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Playing something through the DAC is simply a case of telling aplay which device to use:

aplay -D plughw:1,0 /usr/share/sounds/alsa/Front_Left.wav
aplay -D plughw:1,0 /usr/share/sounds/alsa/Front_Right.wav
aplay -D plughw:1,0 /usr/share/sounds/alsa/Front_Center.wav

Low Latency nrpacks=1

There is a kernel module option to reduce the latency and help increase the quality of the USB audio output. I’ve enabled it to no noticeable effect other than to remove some error messages that were showing up rather frequently in the kernel log.
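For reference, this is the line I mean – a modprobe options fragment (the file name under /etc/modprobe.d is my own choice, any .conf file there will do):

```
# /etc/modprobe.d/snd-usb-audio.conf
options snd-usb-audio nrpacks=1
```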

The guys over at The Raspyfi Project have made a rather extensive list of optimisations they’ve made to the stock Raspbian image.


Using ALSA directly is nice and simple and should simply work everywhere if aplay worked above.

Unfortunately only a single device can use ALSA at a time, so for the audio nirvana we’re aiming for we need to make use of a sound server. There have been a lot of different sound servers over the years in Linux, one of the more recent ones is PulseAudio. It has its fans and detractors, but is generally well supported in modern distributions with Raspbian being no exception.


PulseAudio as with everything else sound related in Linux seems to be sporadically documented and there is also a lot of out of date information floating around. Thankfully the distribution maintainers do a generally fantastic job of pulling everything together.

System Wide PulseAudio

The only way I could get clean, glitch free playback with PulseAudio with the USB DAC was to run the PulseAudio daemon as root. The documentation for PulseAudio goes to great lengths to explain why you shouldn’t run a system wide instance of PulseAudio.

Running PulseAudio as the pi user resulted in continual and very painful to listen to clicking in the output.

For this project I am running the Raspberry Pi headless with no X11 running, so this is the only sane way to get PulseAudio up and running. We also fall under the one acceptable use case for running system wide PulseAudio – that of an embedded system with no real users.

Raspbian ships with an init script already supplied and to enable it you need to edit /etc/default/pulseaudio and change the appropriate line:

PULSEAUDIO_SYSTEM_START=1
I also added the pi and mpd users to the pulse-access group to allow them access.

root@raspberrypi:~# adduser pi pulse-access
root@raspberrypi:~# adduser mpd pulse-access
root@raspberrypi:~# groups mpd
mpd : audio pulse-access
root@raspberrypi:~# groups pi
pi : pi adm dialout cdrom sudo audio video plugdev games users netdev input indiecity pulse-access

Pulse Resampling

Older versions of PulseAudio used a fixed sample rate and would re-sample on the fly. With the release of v2.0 this has changed: if the output device supports multiple sample rates then they will be switched on the fly.

You can get a list of available sample rates:

pi@raspberrypi ~ $ cat /proc/asound/card1/stream0 
HiFimeDIY Audio HiFimeDIY DAC at usb-bcm2708_usb-1.3, full speed : USB Audio

  Status: Stop
  Interface 3
    Altset 1
    Format: S16_LE
    Channels: 2
    Endpoint: 3 OUT (ADAPTIVE)
    Rates: 8000, 16000, 32000, 44100, 48000, 96000
  Interface 3
    Altset 2
    Format: S24_3LE
    Channels: 2
    Endpoint: 3 OUT (ADAPTIVE)
    Rates: 8000, 16000, 32000, 44100, 48000, 96000


You can verify that PulseAudio is indeed switching on the fly with a couple of simple steps. Play a file with a sample rate of, say, 44.1 kHz and whilst it is playing:

pi@raspberrypi ~ $ cat /proc/asound/card1/stream0 
HiFimeDIY Audio HiFimeDIY DAC at usb-bcm2708_usb-1.3, full speed : USB Audio

  Status: Running
    Interface = 3
    Altset = 1
    Packet Size = 388
    Momentary freq = 44100 Hz (0x2c.199a)
  Interface 3
    Altset 1
    Format: S16_LE
    Channels: 2
    Endpoint: 3 OUT (ADAPTIVE)
    Rates: 8000, 16000, 32000, 44100, 48000, 96000
  Interface 3
    Altset 2
    Format: S24_3LE
    Channels: 2
    Endpoint: 3 OUT (ADAPTIVE)
    Rates: 8000, 16000, 32000, 44100, 48000, 96000


In the first few lines (Momentary freq =) you can see the sample rate that is actually being used by the hardware.

There are almost certainly other ways to verify this and query all the properties of audio devices, but this seems to do the job!
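If you want to script the check, the Momentary freq line can be pulled out with sed. A quick sketch – the sample text is inlined here so the parsing can be tried anywhere; on the Pi you would read /proc/asound/card1/stream0 instead:

```shell
# Extract the momentary frequency from stream0-style output.
sample='  Status: Running
    Momentary freq = 44100 Hz (0x2c.199a)'
echo "$sample" | sed -n 's/.*Momentary freq = \([0-9]*\) Hz.*/\1/p'
# prints: 44100
```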


To cut down on the wires going to the system I opted to get a WiFi USB dongle. It’s a low profile, low power adapter the same size as the micro Bluetooth dongles that have been around for a while now.

The chipset is a RTL8188CUS and with the latest Raspbian image simply worked out of the box.

pi@raspberrypi ~ $ sudo lsusb | grep Real
Bus 001 Device 004: ID 0bda:8176 Realtek Semiconductor Corp. RTL8188CUS 802.11n WLAN Adapter

There are plenty of guides on the web on how to configure wireless USB dongles for the Raspberry Pi.

AutoFS and NFS

My audio is all stored on a NAS which exposes it via Samba and NFS.

I use AutoFS to make mounting the networked filesystem nice and easy. A few packages are needed: nfs-common, rpcbind and autofs5.

Edit /etc/auto.master and uncomment the /net line:

/net    -hosts

After starting up the autofs daemon you should be able to simply browse your network shares under /net/machine – for example mine is under /net/plug.local.


MPD was a requirement for this project as it allows music to be queued up from a variety of sources and simply left to play.

Getting MPD up and running was easy and simply a case of telling it to use PulseAudio. Edit /etc/mpd.conf and set your audio_output section like so:

audio_output {
        type            "pulse"
        name            "MPD PulseAudio Output"
}
Setting the music_directory option to point to my AutoFS NAS mount directory was the only other change necessary to /etc/mpd.conf.
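For illustration, that option looks something like this – the path here is hypothetical, pointing at an AutoFS mount like the one described above:

```
music_directory         "/net/plug.local/music"
```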


With the basic audio output working the next step is a UPnP renderer so that I can get my Android phone in on the audio action. There are a few different UPnP renderer options available for Linux, but the easiest to get up and running I found to be gmediarender-ressurect. A few other people have got this up and going on the Raspberry Pi.

I’ll hand over to the official documentation:

And a couple of useful posts:

gmediarender-ressurect uses GStreamer to output its audio, which doesn’t default to using PulseAudio. Thankfully it allows the audio output sink to be controlled from the command line:

/usr/local/bin/gmediarender -f upnp-pi --gstout-audiosink pulsesink

Automatic Startup of gmediarender-ressurect

UPDATE – solved, see Musical Pi Follow Up

Using PulseAudio made getting automatic startup working a bit tricky. I had a couple of attempts at creating an init.d script, which were partially successful. gmediarender would start successfully, but would insist on routing audio out via the onboard audio of the Raspberry Pi rather than the USB DAC.

My solution was to simply start it up via a line in /etc/rc.local:

su -c "/usr/local/bin/gmediarender -f upnp-pi -d --gstout-audiosink pulsesink" pi

Simple, easy and does the trick.

GStreamer Default Sink

You can set GStreamer to use PulseAudio as its default sink with a quick GConf change:

gconftool-2 -t string --set /system/gstreamer/0.10/default/audiosink pulsesink

As previously mentioned though, this wasn’t enough for me to get everything working on boot. The /etc/rc.local approach seems to be the most reliable so far.


As there are also fruity products around in our household AirPlay support was definitely on the list of things to get working. Enter ShairPort!

Again I’ll hand over to others at this point:

Automatic Startup of ShairPort

UPDATE – solved, see Musical Pi Follow Up

Again I had a few problems getting the supplied init.d script to work. It is almost certainly something to do with using PulseAudio, but again a simple one liner in /etc/rc.local does the job:

su -c "/usr/local/bin/ -a air-pi -d --ao_driver=pulse" pi

Networked PulseAudio Speakers

One benefit of using PulseAudio is that you can make use of PulseAudio network streaming for next to no effort. This allows me to route all the audio output from my laptop to a decent set of speakers via the Raspberry Pi, great for things like Spotify or anything else that can’t be run directly on the Raspberry Pi.

There are a few bits and pieces necessary to get this working.

Enable receiving and advertising networked audio on the Raspberry Pi by adding the following to /etc/pulse/

load-module module-native-protocol-tcp auth-ip-acl=;
load-module module-zeroconf-publish

Obviously change the ip address ranges to suit and you’ll need to install pulseaudio-module-zeroconf.

Finally you need to allow PulseAudio to make use of the networked speakers on your other machines. Fire up paprefs on a machine with PulseAudio installed and check “Make discoverable PulseAudio sound devices available locally”. You will probably have to log out and back in again for this option to take effect.

I had to install pulseaudio-utils to get the paprefs application.

On my Ubuntu 12.04 laptop in the “Sound Settings” applet I now get the option to choose “HiFimeDIY DAC Analog Stereo on pulse@raspberrypi.local” as a destination for my sound. Lovely!


I get occasional niggles with the wireless – it feels like the dongle goes to sleep sometimes and takes a while to respond after periods of inactivity. That and occasional signal problems make me think it may be worth considering going down the wired ethernet route. That said the problems seem transient and infrequent enough to be ignored and in general the bandwidth is stable enough to stream FLAC and networked PulseAudio without issue.


One final thing I’d love to get working is getting audio from Windows through to the Raspberry Pi in the same way that PulseAudio lets me do with Linux. There are a few options floating around on the web, including Airfoil which works with the AirPlay protocol, but I’ve not experimented yet.

All in all though it’s been a fun project and I’m enjoying having my music out in the open and not trapped in my headphones!

UPnP always struck me as a good idea, but until recently it’s never been something that has made it into my day to day media setup. That was until a colleague showed me BubbleUPnP – a very nice little Android app that seemed to provide the piece that, for me, had been missing.

No Configuration

The fact that everything simply finds each other via broadcast messages is a big part of what makes it so appealing. No configuration, no messing around with IP addresses and fiddling with static DHCP allocations or even worse hard coding IP addresses into machine configurations. Turn it on and it’s available.

I think the key to making it all work is having a decent remote control for all things UPnP. Something that exploits the touch capabilities of smartphones makes perfect sense and it’s generally always to hand when you want it.

Anything Anywhere

I’m a big fan of the idea of being able to take music from any device that exposes content to any device capable of playing it. The ability to play music from my phone, laptop, dedicated NAS or anything else capable of handling UPnP is how it should be.

Most new smart televisions seem to show up as a UPnP renderer, albeit slightly limited in the range of codecs they accept. That can of course be dealt with by having a UPnP server that transcodes, but then you are back into the world of configuration and setup.


My NAS of choice is currently a TonidoPlug 2 running ArchLinux. MediaTomb simply installs and runs without any hassle and configuration was obvious and simple.

I like its no nonsense web GUI that just gets the job done and its resource usage seems minimal. Should I wish to, it also has full support for transcoding to support less flexible or less powerful UPnP render devices, but the TonidoPlug certainly wouldn’t cope with video transcoding.

XBMC fully supports UPnP and can be both a source of content and a renderer. It is also (as ever) very flexible in terms of the codecs and formats it supports, so it is my destination of choice compared with playing to the TV directly.

As my original post on my DIY DVI to SCART cable is one of the most popular posts to date I thought I would expand a bit more on the details of making the cable. I did have some photos of the final product, but an unfortunate sequence of events occurred:

  • I took the photos on my mobile
  • I upgraded the firmware with a CyanogenMod build, which wipes the phone
  • I bought a new television and moved home so threw out the DVI to SCART cable I’d built as I could now use HDMI

You could argue that is a fairly long winded sequence of events, but that’s normally how they go! Short answer: decent, automated backups. I now let Google+ Instant Upload run and make sure I’ve got backups of all my photos as and when I take them.

Anyway, back to the DVI to SCART lead …

Parts List

My previous post contains more details about why everything is wired up the way it is. This post is just to give more detail about how the cable was put together.

Parts list:

  • DVI to VGA cable
  • SCART plug
  • Resistors – 3.3kΩ, 1.2kΩ, 820Ω, 270Ω, 220Ω, 68Ω
  • BC548B NPN transistor
  • LM317 voltage regulator in a TO-92 case
  • Breadboard
  • Electrical tape
  • Small Plastic Enclosure – I used one from Maplin
  • Connecting wires

I took the plunge and upgraded my TV box to Ubuntu 12.04 from 10.04. I opted to do a complete re-install rather than upgrade. I do realise 12.10 is now out, but this post has been sat around needing a tidy up for a while 😉

Getting the remote working last time wasn’t entirely straightforward and hasn’t really improved. I didn’t take any notes last time around either, so I’ve kept notes this time and hope they are of use to someone else. I think half the problem is due to being a cheapskate and getting a generic mce remote rather than the official one!

The Remote

I’m using this one from Maplin.

The USB IR receiver that came with the remote is reported as being a Formosa21 eHome Infrared Transceiver with Device ID 147a:e03e:

Bus 005 Device 002: ID 147a:e03e Formosa Industrial Computing, Inc. Infrared Receiver [IR605A/Q]

Initial Results

With the 12.04 kernel and standard version of lirc the remote is successfully identified and when running irw it spits out button codes for most of the buttons on the remote. Some of the buttons are not recognised, but more disappointingly out of the box XBMC doesn’t respond to the remote at all.

So there are two problems that we need to deal with here:

  • Updating the lirc config file for the additional buttons
  • Getting XBMC to recognise the remote


Having got ArchLinux up and running on the TonidoPlug 2 it was a trivial matter in the extreme to get a functioning MythTV backend up and running. Installing the packages was straightforward and simply done via the standard package manager – pacman.


The only minor problem with the setup and configuration is how on earth you run the setup without a display. Thankfully it’s easy – you can simply rely on X11 forwarding.

Simply ssh over to the box with the -X option enabled and get the GUI locally. If you’re trying to do this under Windows you should be able to achieve the same results with PuTTY and Xming.


Recording a single SD show results in an ~30% CPU usage load, which I am more than happy with for now. I’ve not ventured into trying to record HD signals yet so the plug may not be up to the job, but we shall see.


A while ago the KuroBox got replaced with an Atom based Zotac machine, which is performing fantastically as an XBMC machine. I still wanted something really low power to leave on all the time as a network storage/media storage device.

After some reading I settled on getting a TonidoPlug 2 – it’s barely bigger than a 2.5″ drive, draws 1.2 watts of power and it’s a full ARM based Linux machine! The perfect home server and hopefully plenty of scope for poking around and experimenting.


There are a few goals for this box:

  1. Always on media server
  2. NAS for backups and other general file sharing
  3. MythTV backend if possible

Out of the box

Out of the box pretty much all of the goals listed above are achievable, but everything including the kernel was a bit out of date. For general file sharing this doesn’t present too much of a headache, but for MythTV I’ve found it’s generally better to be able to keep up to date. After a few attempts at compiling my own kernels (a project I intend to revisit soon) I found that ArchLinux had recently announced official support for the TonidoPlug 2. So ArchLinux steps up to the stage!

Installation went like a breeze and it was simply a case of following through the documented install process. In no time at all I was up and running with a shiny new stock ArchLinux install.


I’ve not used ArchLinux before, but everything seems pretty familiar. The notes on upgrading packages (here) scare me slightly and remind me far too much of my experiences with Gentoo, but I’ll reserve judgement until I’ve done a few system upgrades as and when necessary.


Getting MythTV up and running was pretty simple and it has been running pretty well, bar one or two very small niggles. I’ve dedicated a separate post to my experiences.

8-bit Data Processing

Getting good performance from CUDA with 8-bit per pixel image data can at first glance be awkward due to the memory access rules – at a minimum you should be reading and writing 32-bit values. With a bit of planning it is actually pretty simple.

Figuring all of this out has helped me get to grips with optimising CUDA code and understand the performance counters provided by the CUDA driver and tools. I’m going to put together a series of posts going over the various problems and solutions I’ve found to dealing with 8-bit data.

CUDA Optimisation

Optimising code for CUDA can be tricky and the biggest challenge is getting to grips with the memory access rules.

Following the adage of keep it simple stupid I was working on a very simple function trying to get to grips with the NVidia performance tools. I came across some results that seemed counter intuitive, but on closer examination of the ptx code it turns out the compiler was optimising in a way I wasn’t expecting. Something to bear in mind if you’re having trouble sorting out memory access problems.


Various YUV pixel formats come up frequently when working with both live video and movie files. YUV formats generally break down into two types – packed and planar. Packed is the easiest to work with as it’s the most similar to RGB in that the data for each of the colour channels is packed together to form pixels. In planar formats the different colour components are split up into separate buffers. There’s a good summary of the various formats over at

This function extracts the luminance signal from the packed signal, which in this case is in UYVY format.

This code is written to operate on an HD frame, 1920×1080 pixels in size, and runs with a block width of 32 threads. Each thread processes 2 UYVY pixels, so each thread block processes 64 pixels, which rather handily divides exactly into 1920, i.e. we can conveniently ignore boundary conditions here 🙂
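The block geometry arithmetic is easy to sanity check with a couple of shell one liners:

```shell
# 32 threads per block, 2 pixels per thread
echo $((32 * 2))      # pixels per block: 64
echo $((1920 % 64))   # remainder across a 1920 pixel row: 0
echo $((1920 / 64))   # blocks needed per row: 30
```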
