Paul Kocialkowski's coding blog

Free software, programming and stuff


What's up with the Android SDK?

Written by Paul Kocialkowski - 05 January 2013 - 11 comments

A couple of days ago, I announced, on behalf of the Replicant project, the release of the Replicant 4.0 SDK, motivated by a recent license change regarding the Android SDK: Google decided to put the SDK as a whole under a non-free license. After talking about it on IRC, FSFE member Torsten Grote decided to write an article to raise public awareness: Android SDK is now proprietary, Replicant to the rescue. Within a few hours, the news was relayed by several online IT media outlets and sometimes got a bad review, with some calling our statement a “non-issue”. Let's check the facts and clear up the situation.

It was first brought to our attention that the SDK is released under a non-free license only a couple of days before we released the Replicant SDK. We didn't know about it before and thus assumed that this was a recent license change. As a matter of fact, we were wrong: we have been told since then that these terms of use have been there all along and that only a small part about fragmentation was added with the Android 4.2 release. However, as far as I can remember (and please let me know if I'm wrong), the end user didn't previously have to explicitly agree to these terms before downloading the SDK. Now they are required to do so before being able to download the SDK package. That's the first thing we find unacceptable: we believe that anyone should be able to develop applications for the Android platform without having to agree to such terms. Now let's take a closer look at what the user must actually agree to:

you may not: (a) copy (except for backup purposes), modify, adapt, redistribute, decompile, reverse engineer, disassemble, or create derivative works of the SDK or any part of the SDK; or (b) load any part of the SDK onto a mobile handset or any other hardware device except a personal computer, combine any part of the SDK with other software, or distribute any software or device incorporating a part of the SDK.

These conditions seem totally unacceptable to me and are likely to prompt anyone who values software freedom to call the Android SDK proprietary. However, let's not stop there; let me get back to what comes right before these statements in the license text:

You may not use the SDK for any purpose not expressly permitted by this License Agreement. Except to the extent required by applicable third party licenses, 

The last part is the meaningful one: it means that, basically, the restrictions do not apply to software that is covered by another free software license. That's how Google avoids breaking the terms of other licenses. Moreover, Google is not the copyright holder of all the software released in the SDK, so it has no right to apply such restrictions to that software anyway. Phew, we're safe: after all, the Android SDK is still free software. But wait, is it really? Are all the files shipped with the Android SDK proven to be free software? If that were the case, why would Google waste time writing down terms that apply to nothing in the SDK? That gives us fair enough reason to suspect that there is actually proprietary software in the SDK, and yet another good reason to release a free SDK such as the Replicant SDK.

Now let's consider the Android SDK manager utility: it is designed, down to the source code, to check for plug-ins and updates from Google. If I recall correctly (once again, correct me if I'm wrong), there used to be a clear license statement for each component: the Google APIs were explicitly shown as non-free software and the emulator images were more or less shown as containing mainly free software. Google has now changed all this, and all the components show the same EULA terms. How can the user tell the difference between what's free and what's not in that component list? It sounds harder than it used to be, and that looks like a problem to us. That's why the Replicant SDK won't check for new components from Google.

So these are the reasons why we call the Android SDK proprietary and why we think there is a problem with it. Even though not all of this is a sudden change, why would it be any less relevant to try and raise public awareness about the issues we've spotted?

2013-01-06 Update: I've checked the licenses of the individual software components shipped with the Android SDK and it turns out that all of them are covered by a free software license. What's the point of that overall proprietary license, then?

Writing a fully-featured, clear and flexible ALSA libaudio for Gingerbread: TinyALSA-Audio

Written by Paul Kocialkowski - 01 August 2012 - 2 comments

As I am porting Replicant, our fully free Android derivative, to the new Goldelico GTA04, I had to deal with ALSA user-space integration in Gingerbread. So let's take a quick look at it:
On Gingerbread (and previous versions), user-space audio is handled by the libaudio library. The Android framework basically interacts with AudioFlinger, the component in charge of loading that libaudio library and dealing with it (that's frameworks/base/services/audioflinger/AudioFlinger.cpp).
So libaudio is basically the place where PCM read/write and mixer handling happen. As we started looking into the various existing ALSA libaudio implementations, my fellow Replicant developer GNUtoo told me about TinyHAL, a clean and flexible audio module that does ALSA and routing from XML configuration files. Too bad: TinyHAL was designed for Ice Cream Sandwich, and the audio API changed in ICS (basically, it is now a module, like the ones for GPS, sensors, lights, etc.). So I couldn't use it as-is, but there were various concepts I hoped I would be able to reuse, like the XML routing config or the use of the TinyALSA lib, which is very clean, simple to use and handles ioctl-level ALSA stuff.
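
To give an idea of what such a routing configuration ends up doing at the ALSA level, here is a minimal sketch using TinyALSA's mixer API. The card number and control names are made up for illustration; they are neither the actual TWL4030 controls nor the TinyALSA-Audio code.

    #include <tinyalsa/asoundlib.h>

    /* Sketch only: enable a hypothetical speaker output path by writing
     * mixer controls, the kind of operation an XML routing entry drives. */
    int route_enable_speaker(void)
    {
        struct mixer *mixer;
        struct mixer_ctl *ctl;

        mixer = mixer_open(0); /* card 0, placeholder */
        if (mixer == NULL)
            return -1;

        /* Switch-type control: value index 0, enabled (1). */
        ctl = mixer_get_ctl_by_name(mixer, "Speaker Playback Switch");
        if (ctl != NULL)
            mixer_ctl_set_value(ctl, 0, 1);

        /* Enum-type control: select an output path by name. */
        ctl = mixer_get_ctl_by_name(mixer, "Playback Path");
        if (ctl != NULL)
            mixer_ctl_set_enum_by_string(ctl, "SPK");

        mixer_close(mixer);
        return 0;
    }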

In the end, I decided to write my own libaudio, using both TinyALSA and XML config files, which I called TinyALSA-Audio. Audio output and mixer handling were quite straightforward and worked quickly. AudioFlinger basically opens the AudioStreamOut at 44100Hz, 2 channels, S16_LE format (signed 16-bit, little endian), which works fine with the audio hardware, a TWL4030 codec here, which didn't complain at all.
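
For reference, this is roughly what opening the playback PCM with TinyALSA looks like for that configuration. It is only a sketch: the card and device numbers and the period sizes are placeholders, not the values used on the GTA04.

    #include <stdio.h>
    #include <tinyalsa/asoundlib.h>

    /* Sketch: open the playback PCM the way AudioFlinger expects it,
     * 44100Hz, 2 channels, S16_LE. */
    struct pcm *open_output(void)
    {
        struct pcm_config config = {
            .channels = 2,
            .rate = 44100,
            .format = PCM_FORMAT_S16_LE,
            .period_size = 1024, /* placeholder */
            .period_count = 4,   /* placeholder */
        };
        struct pcm *pcm;

        pcm = pcm_open(0, 0, PCM_OUT, &config); /* card 0, device 0 */
        if (pcm == NULL || !pcm_is_ready(pcm)) {
            fprintf(stderr, "pcm_open failed: %s\n",
                    pcm ? pcm_get_error(pcm) : "out of memory");
            return NULL;
        }

        /* Frames can then be written with pcm_write(pcm, buffer, size). */
        return pcm;
    }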

The next step was audio input and recording. At first sight, I thought it would be as easy as audio output: just set the controls, open the device via TinyALSA, set the config to what AudioFlinger asks for and read the data. So I basically wrote code that simple, but it failed while setting the config: "cannot set hw params: Invalid argument" was the error. As I didn't see what could possibly be wrong with the params, I decided to take a look at how tinycap, the capture utility that comes with TinyALSA, handles things. It actually sets the params to 44100Hz, 2 channels, S16_LE, and recording then works. When I tried to force the params to what AudioFlinger asks for, that is 8000Hz, 1 channel, S16_LE, I got the very same error as with my libaudio: "cannot set hw params: Invalid argument".
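
That naive capture path basically boiled down to something like the following sketch (again with placeholder card, device and period values); with the output stream already open at 44100Hz stereo, this is the call that reported the hw params error.

    #include <stdio.h>
    #include <tinyalsa/asoundlib.h>

    /* Sketch: open the capture PCM with exactly what AudioFlinger asks for,
     * 8000Hz, 1 channel, S16_LE. */
    struct pcm *open_input(void)
    {
        struct pcm_config config = {
            .channels = 1,
            .rate = 8000,
            .format = PCM_FORMAT_S16_LE,
            .period_size = 256, /* placeholder */
            .period_count = 4,  /* placeholder */
        };
        struct pcm *pcm;

        pcm = pcm_open(0, 0, PCM_IN, &config); /* card 0, device 0 */
        if (pcm == NULL || !pcm_is_ready(pcm)) {
            /* This is where "cannot set hw params: Invalid argument" showed
             * up whenever playback was already open at 44100Hz stereo. */
            fprintf(stderr, "pcm_open failed: %s\n",
                    pcm ? pcm_get_error(pcm) : "out of memory");
            return NULL;
        }

        /* Data would then be read with pcm_read(pcm, buffer, size). */
        return pcm;
    }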

So what was this all about? Does the codec only record at 44100Hz, 2 channels? I tried with zygote stopped, and then 8000Hz mono worked. I also checked in the kernel code: the TWL4030 codec is supposed to handle 8000Hz mono as well. So I deduced that when the output device is opened, the input device will only work with the same config (rate, channels). What a bummer! AudioFlinger asks for 8000Hz mono, not 44100Hz stereo, yet that's all I can get when output is opened (which is always the case when zygote is running).

Thanks to the AOSP, there are various other libaudio implementations that I could learn from. The first thing to find out was whether that "issue" was specific to the TWL4030 or common to all ALSA codecs. So I did the same test (with tinycap) on the Nexus S and Galaxy S, and the result was that neither could record at any rate/channel count other than the ones set when opening output. The Galaxy Nexus audio module confirmed that too. So what is the solution here? Obviously, it consists in finding a way to return 8000Hz mono data to AudioFlinger while reading 44100Hz stereo data: that's resampling.

The Galaxy S and Nexus S libaudio both handled resampling with internal algorithms, which seemed a pain to use and adapt to my libaudio. The solution finally came from the Galaxy Nexus audio module, and I really want to send a big thank-you to the people who wrote it. First of all, it uses the very same TinyALSA as my libaudio does. Second, it doesn't embed complicated resampling algorithms but uses the new Android 4 resampling framework, which wasn't so hard to understand. Behind that engine is the libspeexresampler library, which is part of the Speex code. So all I had to do was backport that Android 4 resampling code, enable the build of libspeexresampler in Gingerbread and make use of it all in my lib.
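
To illustrate the idea, here is a rough, standalone sketch of that capture path: downmix the 44100Hz stereo frames read from TinyALSA to mono, then resample them to the 8000Hz mono stream AudioFlinger asked for. It calls the Speex resampler API directly rather than going through the Android 4 resampler wrapper, and the helper names and quality setting are my own choices, not the actual TinyALSA-Audio code.

    #include <stdint.h>
    #include <speex/speex_resampler.h>

    #define IN_RATE  44100
    #define OUT_RATE 8000

    static SpeexResamplerState *resampler;

    /* Create a mono resampler going from 44100Hz to 8000Hz.
     * Quality 4 is an arbitrary middle-ground choice for this sketch. */
    int resampler_init(void)
    {
        int err = 0;

        resampler = speex_resampler_init(1, IN_RATE, OUT_RATE, 4, &err);
        return resampler != NULL ? 0 : -1;
    }

    /* in: interleaved stereo S16 frames read from TinyALSA at 44100Hz
     * out: mono S16 frames at 8000Hz
     * Returns the number of output frames actually produced. */
    unsigned int downmix_and_resample(const int16_t *in, unsigned int in_frames,
                                      int16_t *out, unsigned int out_frames)
    {
        int16_t mono[in_frames];
        spx_uint32_t in_len = in_frames;
        spx_uint32_t out_len = out_frames;
        unsigned int i;

        /* Average left and right samples to get a mono signal. */
        for (i = 0; i < in_frames; i++)
            mono[i] = (in[2 * i] + in[2 * i + 1]) / 2;

        /* Resample the mono signal down to 8000Hz (single channel, index 0). */
        speex_resampler_process_int(resampler, 0, mono, &in_len, out, &out_len);

        return out_len;
    }

The speex_resampler_init() and speex_resampler_process_int() calls come straight from the Speex resampler API, which is what the libspeexresampler build in the Android tree provides; in Gingerbread that means enabling the external/speex resampler build, as described above.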

Though, wait a second: when reading the AudioFlinger code, everything seemed to indicate that AudioFlinger embeds its own resampling engine, so that when the libaudio reports different parameters, it handles resampling to what it wants (mostly 8000Hz mono). So I tried to make use of that, in vain. I couldn't figure out why it didn't work; it just didn't. I read the AudioFlinger code several times, made sure the resampler was enabled and all, but in the end the produced audio was just garbage, so I gave up on using AudioFlinger's resampler. After all, if none of the AOSP libaudio implementations make use of it, there might be a reason. So that part appears to be totally messed up and nobody cares enough to fix it; instead, resampling gets implemented in the libaudio itself. Not a very good thing for me, as I would have preferred to leave resampling to the upper layer, but anyway, I had Galaxy Nexus code that could be adapted to my lib. So that's basically what I did: backporting the Android 4 resampler code to my libaudio, making use of it and using the ICS external/speex repo. At the end of this misadventure, and with some fine tuning, it all worked.

That's pretty much the end of the story: I now have my TinyALSA-Audio lib that handles input/output routing via the mixer, as well as audio output and audio input at various rates. Here are some links: