Paul Kocialkowski's coding blog

Free software, programming and stuff

Reverse engineering the Elan KTF2K touchscreen driver

Written by Paul Kocialkowski

I recently acquired an unbranded Allwinner A13 tablet in order to port Replicant to it: this platform is well supported by free software (the Linux kernel and the u-boot bootloader) and there is an active community of developers working on free software for the Allwinner A1x platforms: linux-sunxi.

The tablet I ended up with contains an Elan EKTF2000 touchscreen, but I couldn't find any driver for it in the linux-sunxi kernel tree: the source code was simply never released, even though the module is marked as GPL-licensed. Moreover, since the tablet is unbranded, there was no one I could contact to request the source code. So I asked around, and it turned out that nobody knew of any source code release for that touchscreen. However, the tablet came with Android preinstalled, and there was an ektf2k.ko kernel module on it.

After some research, I finally found a driver for Elan KTF2000 touchscreens written by HTC. It seemed to match mine (both use I2C) and preliminary tests revealed that my touchscreen used the same protocol (on top of I2C). However, that was not quite enough to write a usable implementation for my device: the coordinates returned by my touchscreen did not match the screen size, as it reported values up to 896x576 while the screen size is 800x640. So the whole issue was figuring out these values (896 and 576) at run time in order to scale them down to the actual screen size.
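
Once the raw maximums are known (the rest of this post is about obtaining them from the chip), the scaling itself is trivial; here is a minimal sketch of the linear mapping (the helper and its name are mine, just for illustration):

    /* Map a raw coordinate in [0, raw_max] onto the screen range [0, screen_max]. */
    static int elan_scale_axis(int raw, int raw_max, int screen_max)
    {
        return raw * screen_max / raw_max;
    }

With the raw maximums read from the chip at probe time, the same driver can work whatever panel is wired to the controller.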

The preinstalled Android system came with a kernel module called ektf2k.ko, which is the actual driver. When it was loaded, I saw this message in the kernel logs:

[elan] __fw_packet_handler: x resolution: 576, y resolution: 896

This meant that the driver contained the code to read these values from the touchscreen chip.

I quickly understood how the touchscreen protocol works by reading the HTC driver: requests are arrays of 4 bytes, with the first byte set to 0x53 (indicating a request) and the second set to a particular command (indicating what is requested). Since requests are usually static arrays defined in the code (that's the way it's done in the HTC driver), declared at the beginning of the function, I knew that the 4-byte array corresponding to the request for the resolution I needed had to be stored somewhere in the ektf2k.ko module.
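
As a hedged illustration of that pattern (the names and the exact command byte are mine, not necessarily the original driver's), such a request would look like this on the kernel side:

    #include <linux/errno.h>
    #include <linux/i2c.h>

    /* 4-byte request: 0x53 marks a request, the second byte selects the command. */
    static const u8 elan_cmd_example[] = { 0x53, 0x00, 0x00, 0x01 };

    static int elan_send_cmd(struct i2c_client *client, const u8 cmd[4])
    {
        /* i2c_master_send() returns the number of bytes actually written. */
        return i2c_master_send(client, cmd, 4) == 4 ? 0 : -EIO;
    }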

Thanks to objdump, I disassembled the module (it is legal to perform such reverse engineering in Europe) and looked at the assembly code for the __fw_packet_handler function. I clearly saw the various calls to elan_ktf2k_ts_get_data and printk, but no sign of the data packets. I then looked at the .rodata section, which contains, as its name suggests, the read-only data where the packets would likely be stored. The string “__fw_packet_handler” is stored at offset 0170 in this section. Right before it, I found the following data:

 0160 53000001 53600000 53630000 53f00001  S...S`..Sc..S...

This looks very much like static arrays of data with the first byte set to 0x53! So I tried issuing requests with the commands 0x00, 0x60, 0x63 and 0xf0, and received the height with 0x60 and the width with 0x63! The values were not in the most obvious format, but since 576 is 0x240 and 896 is 0x380, it was easy to see that the responses contained them.
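
Here is a hedged sketch of how probe code can query and decode these values; the 12-bit packing (low byte in the third response byte, high nibble in the upper half of the fourth) is an assumption that happens to be consistent with 0x240 and 0x380, and the real driver also waits for the interrupt line before reading the response:

    #include <linux/errno.h>
    #include <linux/i2c.h>

    /* Commands found in .rodata: 0x60 returns the height, 0x63 the width. */
    static const u8 elan_cmd_height[] = { 0x53, 0x60, 0x00, 0x00 };
    static const u8 elan_cmd_width[]  = { 0x53, 0x63, 0x00, 0x00 };

    static int elan_read_resolution(struct i2c_client *client,
                                    const u8 cmd[4], unsigned int *value)
    {
        u8 resp[4];

        if (i2c_master_send(client, cmd, 4) != 4)
            return -EIO;
        /* The real driver waits for the interrupt before reading. */
        if (i2c_master_recv(client, resp, 4) != 4)
            return -EIO;

        /* Assumed packing: e.g. resp[2] = 0x40, resp[3] = 0x2X -> 0x240 (576). */
        *value = resp[2] | ((resp[3] & 0xf0) << 4);

        return 0;
    }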

In the end, I completed my implementation, and I now have a fully working touchscreen (with up to 5 contact points) on the free software kernel!
The patch is: input: Elan KTF2K touchscreen driver with CTP bindings

Update: I recently found the source code that partially matches the kernel module preinstalled in my tablet: ektf2k.c

Orientation vector calculation from accelerometer and magnetometer sensors in Android

Written by Paul Kocialkowski

While reverse engineering the Galaxy Tab 2 sensors, I was looking for a way to calculate the orientation vector from the acceleration and magnetic field vectors: I looked at every sensors implementation I could find, and each time this computation was held in some proprietary component, to the point that the Galaxy Tab 2 has a user-space blob dedicated to the task (orientationd). Since I am not an expert in physics, I soon gave up on writing a free orientationd implementation, which was a real shame given the time I spent making the geomagnetic sensor work properly. Then I realized there was one last implementation I hadn't looked at: the free software user-space program for the AKM8975. So many thanks to Asahi Kasei: I was able to reuse that code directly and it worked perfectly on the first try.
That's pretty amazing!

All the gory details are in the akmdfs/AKFS_APIs_8975/AKFS_Direction.c file. To put it in a nutshell:

[figure: orientation]
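
This is not the AKM code itself, just a rough sketch of the standard tilt-compensated computation it performs (axis and sign conventions are assumptions here and vary between implementations):

    #include <math.h>

    /* Orientation (azimuth, pitch, roll, in radians) from the acceleration and
     * magnetic field vectors: roll and pitch come from gravity, then the magnetic
     * field is tilt-compensated and projected onto the horizontal plane to get
     * the azimuth. */
    static void orientation(const float acc[3], const float mag[3],
                            float *azimuth, float *pitch, float *roll)
    {
        float bx, by;

        *roll = atan2f(acc[1], acc[2]);
        *pitch = atan2f(-acc[0], acc[1] * sinf(*roll) + acc[2] * cosf(*roll));

        bx = mag[0] * cosf(*pitch)
           + mag[1] * sinf(*pitch) * sinf(*roll)
           + mag[2] * sinf(*pitch) * cosf(*roll);
        by = mag[1] * cosf(*roll) - mag[2] * sinf(*roll);

        *azimuth = atan2f(-by, bx);
    }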

In the end, I think this piece of code from that free software implementation made my day!

Galaxy S2 Replicant port status update

Written by Paul Kocialkowski

Quite some time ago, I was given the opportunity to receive a crowd-funded Galaxy S2 phone. Even though I was very thankful for it, I couldn't really focus on it at first, since I had to handle work on the various other devices I was dealing with. It left me somewhat sad, as I felt it was my duty to add proper Replicant support for it. Today, I'm proud to announce that the biggest part of the work to support it is over.

The modem (XMM6260)

At first, we had to add support for the modem, an XMM6260 with a custom Samsung firmware. The modem protocol is what we call Samsung IPC, the very same one used in the Nexus S or Galaxy S. Our lower-layer library to handle it is libsamsung-ipc, which is shared between Replicant and SHR. So we had to add support for the XMM6260 in libsamsung-ipc, along with the Galaxy S2-specific bits. Thoughtfully, we designed the upper layer, Samsung-RIL (which is specific to Replicant), to work with libsamsung-ipc regardless of the device it's running on. Nowadays, modem support is complete and we have working calls, messages and data. In any case, support for modem features is up to Samsung-RIL, so it's not Galaxy S2-specific.

The Audio CODEC (Yamaha MC1N2)

After taking a break from Galaxy S2 development, I finally got back to it, and started the Replicant 4.0 version for the occasion. Since the audio module was non-free in CyanogenMod, it was one of the key components to add support for. (What good is a phone if you can't get any sound out of it?) After digging a little into the kernel code, it turned out that the audio CODEC had an ALSA interface driver. That means PCM In/Out interfaces as well as mixer controls. The only problem was that I still couldn't get any sound out of it using the TinyALSA test utils. After a bit of research, I found out about the /dev/snd/hwC0D0 node, which implements hardware-specific controls (via ioctl). After adding debug prints to it, and with the help of some CyanogenMod developers, I was able to reimplement it in my Yamaha-MC1N2-Audio library. The ALSA part was done with a 4.0 update (call it a complete rewrite) of my TinyALSA-Audio library. The combination of the two made it possible to have sound with Replicant (including during calls). It has even been used by CyanogenMod since version 10.1!
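
For the plain ALSA side, the TinyALSA calls involved look roughly like this (the mixer control name is made up, the actual routing controls are board-specific, and the Yamaha-specific part goes through the hwdep node instead):

    #include <tinyalsa/asoundlib.h>

    /* Enable a (hypothetical) routing control and open the first PCM output of
     * card 0, which is roughly the kind of sequence TinyALSA-Audio automates. */
    static struct pcm *audio_output_open(void)
    {
        struct pcm_config config = {
            .channels = 2,
            .rate = 44100,
            .period_size = 1024,
            .period_count = 4,
            .format = PCM_FORMAT_S16_LE,
        };
        struct mixer_ctl *ctl;
        struct mixer *mixer;
        struct pcm *pcm;

        mixer = mixer_open(0);
        if (mixer == NULL)
            return NULL;

        /* "Speaker Switch" is an illustrative name, not an actual MC1N2 control. */
        ctl = mixer_get_ctl_by_name(mixer, "Speaker Switch");
        if (ctl != NULL)
            mixer_ctl_set_value(ctl, 0, 1);
        mixer_close(mixer);

        pcm = pcm_open(0, 0, PCM_OUT, &config);
        if (pcm == NULL || !pcm_is_ready(pcm)) {
            if (pcm != NULL)
                pcm_close(pcm);
            return NULL;
        }

        /* Frames can then be written with pcm_write(). */
        return pcm;
    }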

The sensors (K3DH accelerometer)

With modem and audio support, the Galaxy S2 became usable as a phone. Thanks to the free hwcomposer module, it's very fast too, so I decided to use it as my main phone for a while, and frankly quite enjoyed the ride. The sensors also relied on a non-free library, libakm: AKM is the compass manufacturer, but the library also includes the bits to properly handle the K3DH accelerometer chip. The situation is quite similar to the Nexus S sensors: I was able to figure out the accelerometer part back then (it was a KR3DH) and implemented it in the libakm_free library. Since it was quite easy on the Nexus S (libakm was just a passthrough), I gave it a try on the Galaxy S2. After tracing the K3DH kernel driver, I figured out that the values returned by libakm were just the result of linear functions applied to the data returned by the kernel. I renamed libakm_free to Samsung-Sensors and added support for the K3DH there.
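
In other words, a conversion of this form, where the per-axis scale and offset are placeholders rather than the actual calibration values:

    /* Convert a raw accelerometer reading to m/s^2 with a linear function;
     * the scale and offset passed in are illustrative, not the real K3DH values. */
    static float k3dh_axis_convert(int raw, float scale, float offset)
    {
        return (float) raw * scale + offset;
    }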

The cameras (M5MO/S5K5BAFX)

[photo: Galaxy S2 camera]

Galaxy S2 support was already pretty decent at that point, and I was kind of proud of myself. But take a look at the Galaxy S2's characteristics and you'll see that one of its key features is the 8MP camera it embeds. Sadly, there was no usable camera module around. It appeared to have a V4L2 driver though, which is pretty standard and easy to work with. However, I feared that I'd face the same situation as with audio: a standard interface that is only usable alongside a non-trivial hardware-specific one. Once again, I traced the kernel driver and started implementing, step by step. After a couple of weeks of work (I wrote the implementation from scratch and obviously couldn't spend time on it every day), it appeared that the original non-free camera module was doing a lot of unnecessary output/overlay operations. So I decided to cut out the crap and stick to the essential, that is, only using the V4L2 capture interface. This comes with some issues, such as the inability to resize/crop the output buffer, but I think I found acceptable workarounds for that. In the end, my camera module turned out to work quite well and is now fully featured (except for EXIF, which is currently broken, but it's such a pain in the ass that I don't really want to get into it and fix things). I pushed the code to the Galaxy S2 device tree as well as to my personal Exynos Camera git repo.
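
As a rough idea of what the capture-only path looks like (the device node, pixel format and buffer handling here are assumptions; the real Exynos Camera code does much more than this):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Configure a V4L2 capture device and request mmap buffers; streaming would
     * then be started with VIDIOC_STREAMON and frames dequeued with VIDIOC_DQBUF. */
    static int camera_capture_setup(const char *node, unsigned int width,
                                    unsigned int height)
    {
        struct v4l2_requestbuffers reqbufs;
        struct v4l2_format format;
        int fd;

        fd = open(node, O_RDWR);
        if (fd < 0)
            return -1;

        memset(&format, 0, sizeof(format));
        format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        format.fmt.pix.width = width;
        format.fmt.pix.height = height;
        format.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;

        memset(&reqbufs, 0, sizeof(reqbufs));
        reqbufs.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        reqbufs.memory = V4L2_MEMORY_MMAP;
        reqbufs.count = 4;

        if (ioctl(fd, VIDIOC_S_FMT, &format) < 0 ||
            ioctl(fd, VIDIOC_REQBUFS, &reqbufs) < 0) {
            close(fd);
            return -1;
        }

        return fd;
    }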

The future?

The Galaxy S2 is now supported in Replicant as well as the Nexus S is, and the missing (and doable) parts are mainly GPS and the compass. The compass is an AKM8975 chip. Some code was released by AKM for this chipset, and even though my first attempts to make it work failed, I guess there is a way to get it working properly. I haven't renewed my attempts since this is quite a detail and there are probably more important things to work on at the moment. For instance, the GPS: it's a GSD4t chip, the very same as in the Galaxy Nexus. It needs a firmware upload and uses a SiRF-derived protocol that does not seem to be documented anywhere. I hope we'll be able to figure it out somehow: it would be very nice to have GPS support on these two devices!

What's up with the Android SDK?

Written by Paul Kocialkowski

A couple of days ago, I announced, on behalf of the Replicant project, the release of the Replicant 4.0 SDK, motivated by a recent license change regarding the Android SDK: Google decided to put an overall non-free license on its SDK. After we talked about it on IRC, FSFE member Torsten Grote decided to write an article to raise public awareness: Android SDK is now proprietary, Replicant to the rescue. Within a few hours, the news was being relayed by some online IT media, and it sometimes got a bad review, calling our statement a “non-issue”. Let's check the facts and clear up the situation.

It was brought to our attention that the SDK is released under a non-free license only a couple of days before we released the Replicant SDK. We didn't know about it before, and thus assumed that this was a recent license change. As a matter of fact, we were wrong: we have since been told that these terms of use have been there all along, and that only a small part about fragmentation was added with the Android 4.2 release. However, as far as I can remember (and please let me know if I'm wrong), the end user didn't previously have to explicitly agree to these terms before downloading the SDK. Now they are required to do so before being able to download the SDK package. That's the first thing we find unacceptable: we believe that anyone should be able to develop applications for the Android platform without having to agree to such terms. Now let's take a closer look at what the user must actually agree to:

you may not: (a) copy (except for backup purposes), modify, adapt, redistribute, decompile, reverse engineer, disassemble, or create derivative works of the SDK or any part of the SDK; or (b) load any part of the SDK onto a mobile handset or any other hardware device except a personal computer, combine any part of the SDK with other software, or distribute any software or device incorporating a part of the SDK.

These conditions seem totally unacceptable to me and are likely to lead anyone who values software freedom to call the Android SDK proprietary. However, let's not stop there: here is what comes right before these statements in the license text:

You may not use the SDK for any purpose not expressly permitted by this License Agreement. Except to the extent required by applicable third party licenses, 

The last sentence is the meaningful one: it means that, basically, the restrictions are not applicable to software that is covered by another free software license. That's how Google avoids breaking the terms of other licenses. Moreover, Google is not the copyright holder of all the software released in the SDK, so it has no right to apply such restrictions to that software anyway. Phew, we're safe: the Android SDK is still free software. But wait, is it really? Are all the files shipped with the Android SDK proven to be free software? If that were the case, why would Google waste time writing down these terms if they applied to nothing in the SDK? That gives us fair enough reason to suspect that there actually is proprietary software in the SDK, and yet another good reason to release a free SDK such as the Replicant SDK.

Now let's consider the Android SDK manager utility: it is designed, down to the source code, to check for plug-ins and updates from Google. If I recall correctly (once again, correct me if I'm wrong), there used to be a clear license statement for each component: the Google APIs were explicitly shown as non-free software and the emulator images were shown as containing mainly free software. Google has changed all this, and all the components now show the same EULA terms. How can the user tell the difference between what's free and what's not in that component list? It sounds harder than it used to be, and that looks like a problem to us. That's why the Replicant SDK won't check for new components from Google.

So these are the reasons why we call the Android SDK proprietary and why we think there is a problem with it. Even though not all of this is a sudden change, why would it be any less relevant to try and raise public awareness about the issues we've spotted?

2013-01-06 Update: I've checked the license of the individual software components shipped with the Android SDK and it turns out that all of them are covered by a free software license. What's the point of that overall proprietary license then?

Contributing to OpenStreetMap part 2: Gathering data on the field

Written by Paul Kocialkowski

I could keep writing a lot about the ideas behind the project, my personal motivation and so on, but OpenStreetMap is one of the rare projects I contribute to that actually requires people to get out and see things for themselves! That's very good for us hackers, who are used to doing our work behind a screen: for once, we're required to get some fresh air. It has been a good opportunity for me to discover more of the city where I live, get some exercise along the way and find many relaxing places in a natural environment.

Bike

From the moment I started mapping, I have always used a bike to move around town. It's probably the best way to catch every detail along your ride, while making it easy to stop at any moment and any place. I've done mapping on foot a couple of times too; it works well when there is a high density of POI (Points Of Interest), like in the town centre. If that's not the case where you live, you are most likely going to waste a lot of valuable time. On the other hand, it can be a fun way to spend a few hours to kill in the middle of the day.

The first device I used for mapping was the Neo FreeRunner and its embedded GPS. I also got an external antenna to improve the reliability of the traces. On the software side, I was using Hackable:1 and TangoGPS most of the time, but at some point I switched to SHR with FoxtrotGPS. That was pretty nice to use, except that the keyboard was very small and required me to carry a pen. I attached the FreeRunner to my bike using cable ties, though I had to ride very carefully to avoid damaging the phone.

Walking paper

Carrying a sheet of paper and a pen can also turn out to be very useful for drawing a quick sketch of the ways and their names. When possible, printing walking papers (black and white is fine) with the information already mapped in OSM helps a lot too. I'm not a regular user of these methods, but from time to time they help, especially when there is not a lot of information already available on OSM. Another complementary mapping technique that I have used from time to time is voice recording: it allows for very precise descriptions. These techniques work best alongside regular GPS tracking.

Tablet

Once all the streets were properly tagged, I focused on adding particular POI such as stores, bus stops, public buildings, etc. Thanks to the Cadastre, the outline of every building is available in OSM, so POI can be placed precisely without the need for GPS traces. The FreeRunner remains relevant for this task, but no more so than other devices. At some point, I decided to switch to Android devices for mapping (with the OsmAnd app, which is free software). Since most of these devices come with a camera, it also makes it easy to quickly take pictures of the place while mapping. A phone is fine for that, but the best device I've found is a tablet with a large screen: you can place the POI precisely and enjoy the large keyboard. OSM mapping is one of the tasks you'll really enjoy doing more with a tablet than with any other device.
