The state of FireWire audio interface support on Linux
One way to tell whether Linux is ready for professionals is how well certain hardware works. Let's talk about FireWire audio interfaces.
Let's start with the most predictable question.
Why is there no official support for Linux from vendors?
Simply put, it's not worth it for a company that produces professional audio interfaces to have dedicated staff for Linux support.
The FFADO project, which we'll talk about later, has a way to measure the size of its user base: the first time the FFADO mixer runs on a system, it collects data and sends it to the team. Here is a plot of all devices ever used with FFADO on Linux:
Number of FFADO users since 2008
That steady growth, with 3628 unique registered devices at the time of publishing this article, still doesn't justify the expense.
Daniel Wagner started working on Linux drivers for BridgeCo FireWire audio interfaces in 2004, when he joined the company. The project was called FreeBoB because it focused only on BridgeCo's BeBoB chips.
A while later Pieter Palmers joined him to work on streaming, and at some point FreeBoB became FFADO (Free FireWire Audio Drivers). FFADO is now beyond the 2.0 stage and supports over 60 devices from Echo, Focusrite, M-Audio, Terratec, and other vendors, thanks to a certain understanding between the developers and the vendors that was reached in 2009. The level of support ranges from experimental to rock solid.
But here is an interesting thing: the drivers are not part of ALSA, the current Linux audio subsystem. You've probably heard about the convoluted state of audio on Linux before. Well, FireWire audio interfaces are only available when you run the JACK sound server, which is more or less fine for professional users, since JACK is what they use anyway.
FFADO uses D-Bus for communication between the FFADO core and the graphical mixer/control interface. It also ships its own advanced mixer that provides a device-specific user interface, e.g. a matrix for DICE-based sound cards such as the Focusrite Saffire series.
FFADO mixer for Echo Audiofire 2, screenshot courtesy of Artem Popov
And yet, is there a particular reason for this complication? And is there any way ALSA could start supporting FireWire soundcards? That would be a “yes” to both questions. Let's start with the latter.
A while ago, Clemens Ladisch started working on FireWire drivers in ALSA. LGW contacted him and asked several questions about the ongoing work and the outlook. We also contacted FFADO developers for insights on the project's past, the ALSA activity, and possible collaboration between the two projects.
ALSA, Clemens Ladisch
Clemens has been active in the Linux audio development community since about 2000. However, due to lack of time, his involvement is mostly concentrated on ALSA itself.
I saw a couple of patches from you in FFADO a while back. What is the relationship between the two projects? Do you share knowledge? Reuse code?
Direct code reuse is not really possible because of the language differences: FFADO uses C++, while the Linux kernel is written in plain C with some additional restrictions. Also, both the interfaces to the FireWire bus (libraw1394/Juju) and to the sound system (JACK/ALSA) are different. Furthermore, I'm using a completely different algorithm for clock synchronization. However, we do share information about hardware and interface details.
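To make the clock synchronization problem concrete: a FireWire audio driver has to keep estimating how fast the device's sample clock actually runs relative to the system clock, because the two always drift apart slightly. The sketch below is a hypothetical illustration of one common approach, low-pass filtering the observed rate with an exponential moving average; it is not the algorithm Clemens or FFADO actually use, and all names in it are made up.

```c
#include <stddef.h>

/* Hypothetical sketch: track the device's real sample rate by
 * low-pass filtering (samples delta / time delta) observations.
 * Not FFADO's or ALSA's actual algorithm. */
struct drift_estimator {
    double alpha;        /* smoothing factor, 0 < alpha <= 1 */
    double rate;         /* current rate estimate in Hz */
    double last_samples; /* previous device sample counter */
    double last_time;    /* previous system time in seconds */
    int have_sample;     /* have we seen an observation yet? */
};

void drift_init(struct drift_estimator *e, double nominal_rate, double alpha)
{
    e->alpha = alpha;
    e->rate = nominal_rate;
    e->have_sample = 0;
}

/* Feed one (device sample counter, system time) observation;
 * returns the updated rate estimate. */
double drift_update(struct drift_estimator *e, double samples, double time_s)
{
    if (e->have_sample) {
        double dt = time_s - e->last_time;
        if (dt > 0.0) {
            double instant = (samples - e->last_samples) / dt;
            /* Exponential moving average smooths out jitter. */
            e->rate += e->alpha * (instant - e->rate);
        }
    }
    e->last_samples = samples;
    e->last_time = time_s;
    e->have_sample = 1;
    return e->rate;
}
```

A real driver would feed such an estimator from isochronous packet timestamps and use the result to steer buffer pointers or a resampler; it would also have to survive timestamp jitter and bus resets, which this sketch ignores.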
Speaking of which, do you have access to actual devices, or do you rely on FFADO code, or publicly available specs or the specs that some vendors gave to FFADO developers?
I own two devices which together cover three interfaces (DICE, Fireworks, and AV/C). But to be able to support the multitude of devices out there, I have to rely on information from FFADO and on documentation, as far as it is available. (Documentation doesn't always exist, but some vendors show us their firmware source code.)
What's the current status of the FireWire@ALSA project? What could users expect from ALSA with regards to FireWire, say, this year?
At the moment, the released Linux kernel contains complete drivers for several devices that are not supported by FFADO (iSight microphone, FireWave Surround, and LaCie FireWire Speakers).
Software is much less of an issue on Linux these days: Ardour as a free DAW is quite capable
I'm currently hacking on a driver for devices based on the widely-used DICE chip. At this stage, it works only for playback, but I expect to support capturing, clock source selection, and synchronization of multiple devices soon. This year, there should also be drivers for Fireworks- and probably AV/C (BeBoB)-based devices, which would cover all devices except those from M-Audio, RME, and Yamaha.
Is FireWire just another driver for ALSA, or do you have to make substantial changes in the core?
FireWire devices behave like any other sound device; I did not need to make any changes to the ALSA framework. However, the FireWire core is relatively new and untested; I did make some minor modifications and optimizations along the way, and am planning to do some more.
What kind of development efforts from other people make sense at this point to improve the driver?
With the ongoing implementation of new features, the driver is something of a moving target at the moment. Furthermore, there are not many other people who can do Linux kernel FireWire development, and Stefan Richter doesn't have much time either, so improvements are pretty much dependent on how much free time I can find.
FFADO, the developers
The team has seen a lot of changes over time. Daniel Wagner and Pieter Palmers are no longer active. A lot of work is currently done by Jonathan Woithe, Adrian Knoth, and Arnold Krille, and most recently Philippe Carriere has contributed a lot of code to the Focusrite driver.
Back in 2009-2010, how did it feel to get support from most device vendors? Was it like the shifting of tectonic plates? :)
Jonathan Woithe: It depended very much on the vendor. There was significant cooperation from the likes of Focusrite, Stanton, Echo, ESI, Terratec (now Musonic) and Mackie. From what I can tell, these companies quickly recognised the fact that the information we required did not amount to their complete “intellectual property” and were happy to oblige with the programming information.
Conversely, other companies still don't get this despite many attempts at explaining the situation to them. So when dealing with these, it is a bit like shifting tectonic plates, as you so neatly put it. :-)
Harrison Consoles is one of the few hardware/software companies that “got it” and succeeded with Mixbus, an Ardour-based DAW
It should however be noted that there is another aspect to all this: in many cases, the engineers who have the knowledge to realise that the support we require is not detrimental to the company are completely isolated from the public-facing side of the company. Instead, the only point of contact users (or potential purchasers) have are sales people who are told to completely block ALL requests for technical information — so the requests never make it far enough down the chain for them to be properly evaluated.
Getting past this roadblock was — and still is — a challenge. MOTU are the classic example of this (and it's even more complicated in their case because they outsource the development, so it's unlikely that there's anyone at MOTU who can evaluate our request properly). After more than 5 years we are still no closer to an engineering contact who could assist FFADO greatly by answering a few very simple questions about their interfaces.
The situation with RME and their Fireface interfaces was similar. However, back in 2010 a fortunate series of events meant that we were able to get past the roadblock, with the end result that FFADO now supports both the Fireface 400 and 800 devices. So it's not all gloom and doom, but progress can be frustratingly slow at times.
What does your interaction with vendors look like today?
Jonathan: I'm personally only in contact with one vendor, RME, and that's fairly sporadic, mainly because I have most of the information I require to support the Fireface 400/800.
After the initial provision of information, the interaction dropped back to an "email on an as-needs basis" approach which worked well for me. In the near future this may pick up again as enquiries start on the possibility of supporting their newer interfaces.
How would you characterize development of FFADO since the release of v2.0? What's the focus?
Jonathan: It's been mixed, unfortunately. We've had times of rapid development followed by long dormant periods. There have been several underlying causes of this. Firstly, there are only about 3-4 core developers and none of us are paid to work on FFADO — it is an entirely volunteer-based project. Related to that, during 2011 a number of us had other things turn up which meant there was very little time to work on FFADO — so little progress (beyond trivial bug-fixes) was made.
Developer involvement dynamics, courtesy of Ohloh
In terms of features, since the 2.0 release we've added out-of-the-box support for more devices and more variants of previously supported devices. Support for interfaces which use the DICE chipset (which include most Focusrite units) has been added, as has support for the RME Fireface 400/800 interfaces. In addition, important bugs have been fixed and performance has improved for most interfaces.
Source code commit dynamics, courtesy of Ohloh
2012 is looking better for me personally, so I've been trying to pick up the pace and get a new release happening; this is probably the main focus right at the moment. Bug reports are one area that has been neglected for too long (not out of carelessness, but simply due to a lack of resources), and before rolling the next release we want to get on top of those.
What do you think are the main missing parts of FFADO?
Jonathan: Support for more devices is what it comes down to, I think. It would also be good to support the onboard DSP of the newer Focusrite devices, but this requires information which Focusrite has not yet provided (under FFADO these devices currently act as if the DSP weren't there).
Pieter: It would be cool if we were able to support the peer-to-peer network structure that is inherent to the 1394 bus. For example, most devices can be configured to stream directly to each other, which can open up interesting applications, e.g. in live situations.
What was the reason #1 you didn't start the project as part of ALSA?
Daniel: There were several reasons why we didn't choose to write the streaming part within ALSA. First, libraw1394 already provided an API for streaming. We anticipated that adding the FireWire streaming part to the kernel would meet strong resistance. Another reason was that programming in user space is a lot simpler.
As it turns out, user-space streaming is not so simple, especially with multi-device setups. Syncing up different audio streams is difficult, too.
At this point we have very good arguments for why the streaming should be inside the kernel (FFADO serves as a proof of concept). For example, Stefan Richter, the ieee1394 subsystem maintainer, supports this, which is a very good thing.
Pieter: Also, there already was a JACK backend that did iec61883 streaming and that we could use to start from. Besides, programming in user-space is a lot more forgiving than kernel-space, at least back then it was. These days things seem to have improved a lot... Programming mistakes don't seem to cause reboots or disk corruption anymore.
Putting things in perspective, what do you think about ALSA's efforts to get their own implementation of the FireWire drivers? Will your project converge or conflict at some point? Or will they merely coexist?
Pieter: I think there is no divergence at this time. The implementation of the streaming APIs in ALSA is the next step for FFADO. As far as I understood, it is also the idea of the ALSA people (in other words, Clemens) to keep the discovery and control largely outside of the kernel.
Jonathan: From my communications with Clemens Ladisch, it's not really a case of ALSA trying to get their own implementation of the FireWire drivers. For a number of years, the FFADO project has recognised that the streaming component of our driver (the part which deals with audio and MIDI data) would be more stable and more efficient if implemented as a kernel module.
However, none of us have the necessary kernel programming knowledge to make this work. Clemens has started work on a proof-of-concept driver, and the intention is that once complete it can be used by us as a template for the implementation of streaming engines for the different devices we support.
When that happens, the FFADO project will continue to exist. The streaming component of libffado will go away since it will be in the kernel, but the device control aspects (that is, the control of on-board mixers and other device configuration settings) will probably continue in their present form (or something close to it).
The reason for this is that the devices differ in so many ways, both in terms of the wire protocol and user interface requirements, that it is probably not possible to come up with an abstraction system which will work for all interfaces.
The device control is also quite complex; if integrated into the kernel, it would result in a large amount of very complex code, and it's felt that this would be much easier to maintain in userspace.
So in many ways, Clemens's proof-of-concept work is more of an enabler, ultimately providing us with a clearly defined path towards one of our longer-term goals. It's a welcome development.
The ease with which FFADO can shift its streaming subsystem into the kernel comes about because device streaming and control have been carefully separated from the very beginning. This means that we can remove the streaming parts without affecting device control in any way.
In fact, even now the device control part of FFADO would happily run on a system with a hypothetical kernel streaming driver.
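That separation can be pictured as two independent tables of function pointers behind a single device handle. The sketch below is purely illustrative (none of these names exist in FFADO or the kernel); it only shows why the streaming table can be swapped, say for a kernel-backed one, while mixer code, which talks exclusively to the control table, keeps working unchanged.

```c
#include <stddef.h>

/* Hypothetical sketch of streaming/control separation; these names
 * do not come from FFADO or ALSA. */
struct streaming_ops {
    int (*start)(void *priv);
    int (*transfer)(void *priv, float *buf, size_t frames);
};

struct control_ops {
    int (*set_gain)(void *priv, int channel, float db);
};

/* A device bundles both interfaces, but users of one never
 * touch the other, so either side can be replaced on its own. */
struct device {
    void *priv;
    const struct streaming_ops *stream;  /* could move into the kernel */
    const struct control_ops *control;   /* stays in userspace */
};

/* Trivial stand-in implementations keep the sketch self-contained. */
static int dummy_start(void *priv) { (void)priv; return 0; }
static int dummy_transfer(void *priv, float *buf, size_t frames)
{
    (void)priv; (void)buf;
    return (int)frames; /* pretend all frames were transferred */
}
static int dummy_set_gain(void *priv, int channel, float db)
{
    (void)priv; (void)channel; (void)db;
    return 0;
}

static const struct streaming_ops dummy_stream = { dummy_start, dummy_transfer };
static const struct control_ops dummy_control = { dummy_set_gain };

struct device make_dummy_device(void)
{
    struct device d = { NULL, &dummy_stream, &dummy_control };
    return d;
}
```

Mixer code that only ever sees `control_ops` is indifferent to whether `streaming_ops` runs in userspace or wraps a kernel interface, which is essentially the property Jonathan describes.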
Daniel: I really think that the data processing part (streaming) should go into the kernel. Pieter tried quite a few approaches to get the streaming part rock solid in user land. None of them worked perfectly. So the approach Clemens is taking seems like the right next thing to do.
The control part can stay in user land, though the difficult question to answer is what the API between the two components looks like. And then you have to get that API accepted into the kernel. This can be a very difficult task.
Supposing ALSA manages to implement drivers for all devices currently supported by FFADO. What would be FFADO's strong points even then? Or will it be a case of “Mission accomplished”? :)
Jonathan: The implementation of the kernel-mode drivers for most of the FFADO devices will be up to the FFADO project; at least that's how it looks to pan out at this stage. FFADO will continue to develop support for new interfaces and maintain the existing ones.
The only major difference between this and the present situation is that our streaming code will reside in the kernel instead of in our repository. In this respect I expect the situation will be similar to ALSA itself: the ALSA project manages both the in-kernel drivers and a bunch of related userspace utilities.
Pieter: I would be very happy if FFADO became obsolete, e.g. if it merged into the ALSA code tree. The Linux ecosystem is complicated enough as it is; we don't need yet another driver layer. The sooner FFADO is obsolete, the better. But in the meantime we have something that is usable and that has cleared the path for quite a few devices.