Paul Kocialkowski's coding blog

Free software, programming and stuff

The Samsung S5C73M3 interleaved format

Written by Paul Kocialkowski

I am currently working on writing a free software replacement for the Galaxy S3 camera module, based on the Exynos Camera module I wrote a couple of months ago for the Galaxy S2. Both use V4L2, but the implementations differ in the details. In particular, the Galaxy S3's back camera, the Samsung S5C73M3, uses an interleaved format for picture capture.

Since this is a non-standard interleaved format, there is no readily-usable implementation to decode the data. After searching for a long time, all I could find was a commit by one of Samsung's developers that introduced that format to mainline, through a LinuxTV patch. First of all, I can't understand why such a patch was accepted into mainline given that there is no decoder implementation for that format out there. Moreover, the only camera chip that uses it, the S5C73M3, has a driver that was also accepted into mainline. It seems to me like it was blindly included and nobody cared much about how it works in practice. On top of that, this camera chip is mostly found in the Galaxy S3, and I doubt anyone tested mainline on the Galaxy S3 to see whether the S5C73M3 driver works and gives appropriate results.

However, let's not complain too much: that patch gave me crucial information to understand how to properly extract YUV and JPEG from the interleaved data. For reference, here are the explanations given with the patch:

Two-planar format used by Samsung S5C73MX cameras. The first plane contains interleaved JPEG and UYVY image data, followed by meta data in form of an array of offsets to the UYVY data blocks. The actual pointer array follows immediately the interleaved JPEG/UYVY data, the number of entries in this array equals the height of the UYVY image. Each entry is a 4-byte unsigned integer in big endian order and it's an offset to a single pixel line of the UYVY image. The first plane can start either with JPEG or UYVY data chunk. The size of a single UYVY block equals the UYVY image's width multiplied by 2. The size of a JPEG chunk depends on the image and can vary with each line.

The second plane, at an offset of 4084 bytes, contains a 4-byte offset to the pointer array in the first plane. This offset is followed by a 4-byte value indicating size of the pointer array. All numbers in the second plane are also in big endian order. Remaining data in the second plane is undefined. The information in the second plane allows to easily find location of the pointer array, which can be different for each frame. The size of the pointer array is constant for given UYVY image height.

In order to extract UYVY and JPEG frames an application can initially set a data pointer to the start of first plane and then add an offset from the first entry of the pointers table. Such a pointer indicates start of an UYVY image pixel line. Whole UYVY line can be copied to a separate buffer. These steps should be repeated for each line, i.e. the number of entries in the pointer array. Anything what's in between the UYVY lines is JPEG data and should be concatenated to form the JPEG stream. 
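To make that walkthrough concrete, here is a minimal sketch of the extraction loop in C. It assumes the second plane directly follows the first plane in the capture buffer (which matches the layout I describe below) and that the output buffers are large enough; it is only an illustration of the steps from the patch description, not a complete decoder:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Read a 4-byte big-endian value. */
    static uint32_t be32(const uint8_t *p)
    {
        return ((uint32_t) p[0] << 24) | ((uint32_t) p[1] << 16) |
               ((uint32_t) p[2] << 8) | (uint32_t) p[3];
    }

    /*
     * plane1_size is the size of the first plane; the second plane is
     * assumed to start right after it in the same buffer.
     */
    int interleaved_decode(const uint8_t *buffer, size_t plane1_size,
                           unsigned int width, unsigned int height,
                           uint8_t *uyvy, uint8_t *jpeg, size_t *jpeg_size)
    {
        const uint8_t *plane2 = buffer + plane1_size;
        const uint8_t *pointers = buffer + be32(plane2 + 4084);
        uint32_t pointers_size = be32(plane2 + 4084 + 4);
        size_t line_size = (size_t) width * 2; /* UYVY: 2 bytes per pixel */
        const uint8_t *jpeg_start = buffer;
        size_t size = 0;
        unsigned int i;

        /* One 4-byte entry per UYVY line. */
        if (pointers_size < (size_t) height * 4)
            return -1;

        for (i = 0; i < height; i++) {
            const uint8_t *line = buffer + be32(pointers + i * 4);

            /* Anything between the previous UYVY line and this one is JPEG. */
            memcpy(jpeg + size, jpeg_start, line - jpeg_start);
            size += line - jpeg_start;

            /* Copy one UYVY pixel line. */
            memcpy(uyvy + (size_t) i * line_size, line, line_size);

            jpeg_start = line + line_size;
        }

        /* Trailing JPEG data, up to the pointer array. */
        memcpy(jpeg + size, jpeg_start, pointers - jpeg_start);
        size += pointers - jpeg_start;

        *jpeg_size = size;

        return 0;
    }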

At first, I was only getting the first 0xA00000 bytes, which is in fact only the first plane. Hence, I couldn't find the offset to the pointer array (even though I could locate the array manually). I had to enable embedded data with the V4L2_CID_EMBEDDEDDATA_ENABLE control. With that, the buffer gets 0x1000 more bytes: that's the second plane. Then, by applying an offset of 4084 bytes to the start of that second plane, I could locate the offset to the pointer array.
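For reference, enabling that control is a plain VIDIOC_S_CTRL call. Note that V4L2_CID_EMBEDDEDDATA_ENABLE is a Samsung-specific control, so its definition has to come from the vendor kernel headers rather than the mainline videodev2.h:

    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* V4L2_CID_EMBEDDEDDATA_ENABLE comes from the vendor kernel headers. */
    static int embedded_data_enable(int fd)
    {
        struct v4l2_control control = {
            .id = V4L2_CID_EMBEDDEDDATA_ENABLE,
            .value = 1,
        };

        return ioctl(fd, VIDIOC_S_CTRL, &control);
    }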

Since I complained that such an implementation was lacking, I wrote a reference implementation that separates the YUV (it's actually UYVY) and JPEG data from the interleaved format: s5c73m3_interleaved_decode.c.

2013-08-06 Update: After I sent an email to the Samsung developers involved in the mainline patch, I was given details about the format (which I had already figured out, though) as well as a C implementation to separate JPEG and UYVY. The developer also told me he is going to publicly release sample code to decode the format. So I think things are going to be fine, and my criticism will soon no longer be valid. Yay!

Galaxy S2 Replicant port status update

Written by Paul Kocialkowski

Quite some time ago, I was given the opportunity to receive a crowd-funded Galaxy S2 phone. Even though I was very thankful for it, I couldn't really focus on it at first since I had to handle other things on the various other devices I was working on. That left me somewhat sad, as I felt it was my duty to add proper Replicant support for it. Today, I'm proud to announce that the biggest part of the work to support it is done.

The modem (XMM6260)

At first, we had to add support for the modem, an XMM6260 with custom Samsung firmware. The modem protocol is what we call Samsung IPC, the very same one used in the Nexus S and Galaxy S. Our lower-layer library to handle it is libsamsung-ipc, which is shared between Replicant and SHR. So we had to add support for the XMM6260 in libsamsung-ipc, along with Galaxy S2-specific bits. We deliberately designed the upper layer, Samsung-RIL (which is specific to Replicant), to work with libsamsung-ipc regardless of the device it's running on. Nowadays, the modem support is complete and we have working calls, messages and data. In any case, support for modem features is up to Samsung-RIL, so it's not Galaxy S2-specific.

The Audio CODEC (Yamaha MC1N2)

After taking a break from Galaxy S2 development, I finally got back to it, and started the 4.0 Replicant version for the occasion. Since the audio module was non-free in CyanogenMod, it was one of the key components to add support for. (What good is a phone if you can't get any sound out of it?) After digging a little into the kernel code, it turned out that the audio CODEC had an ALSA interface driver. That means PCM In/Out interfaces as well as mixer controls. The only problem was that I still couldn't get any sound out of it using the TinyALSA test utils. After doing a bit of research, I found out about the /dev/snd/hwC0D0 node, which implemented hardware-specific controls (via ioctl). After adding debug prints to it, and with the help of some CyanogenMod developers, I was able to reimplement it in my Yamaha-MC1N2-Audio library. The ALSA part was done with a 4.0 update (call it a complete rewrite) of my TinyALSA-Audio library. The combination of the two made it possible to have sound with Replicant (including during calls). It is even used by CyanogenMod since version 10.1!
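For the curious, testing a PCM output route with TinyALSA boils down to something like the following sketch; the card/device numbers and the mixer control name are made-up placeholders, as the real ones depend on the device:

    #include <tinyalsa/asoundlib.h>

    int play(const void *data, unsigned int size)
    {
        struct pcm_config config = {
            .channels = 2,
            .rate = 44100,
            .period_size = 1024,
            .period_count = 4,
            .format = PCM_FORMAT_S16_LE,
        };
        struct mixer *mixer;
        struct pcm *pcm;

        /* Route the output first: the control name here is a placeholder. */
        mixer = mixer_open(0);
        mixer_ctl_set_value(mixer_get_ctl_by_name(mixer, "Speaker Switch"), 0, 1);
        mixer_close(mixer);

        /* Card 0, device 0 is an assumption. */
        pcm = pcm_open(0, 0, PCM_OUT, &config);
        if (pcm == NULL || !pcm_is_ready(pcm))
            return -1;

        pcm_write(pcm, data, size);
        pcm_close(pcm);

        return 0;
    }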

The sensors (K3DH accelerometer)

With modem and audio support, the Galaxy S2 was made usable as a phone. Thanks to the free hwcomposer module, it's very fast too, so I decided to use it as my main phone for a time, and frankly quite enjoyed the ride. The sensors also relied on a non-free library, called libakm: AKM is the compass manufacturer. Nevertheless, that library also includes the bits to properly handle the K3DH accelerometer chip. The situation is quite similar to the Nexus S sensors; I had figured out the accelerometer part back then (it was a KR3DH) and implemented it in the libakm_free library. Since it was quite easy for the Nexus S (libakm was just a passthrough), I gave it a try on the Galaxy S2. After tracing the K3DH kernel driver, I figured out that the values returned by libakm were just the result of linear functions applied to the data returned by the kernel. I renamed libakm_free to Samsung-Sensors and added support for the K3DH there.
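In other words, converting a raw accelerometer reading only takes a multiply and an add per axis. The coefficients below are made-up placeholders just to show the shape of it; the real ones come from tracing libakm:

    #include <stdio.h>

    #define GRAVITY 9.80665f

    /* Hypothetical linear mapping from raw kernel values to m/s^2;
     * the actual scale and offset per axis come from tracing libakm. */
    static float k3dh_convert(short raw, float scale, float offset)
    {
        return (float) raw * scale + offset;
    }

    int main(void)
    {
        /* Example with a +-2g range over signed 16-bit values (an assumption). */
        short raw = 16384;

        printf("%f m/s^2\n", k3dh_convert(raw, GRAVITY * 2.0f / 32768.0f, 0.0f));

        return 0;
    }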

The cameras (M5MO/S5K5BAFX)

[Image: Galaxy S2 Camera]

Galaxy S2 support was then already pretty decent, and I was kind of proud of myself. Though, if you take a look at the Galaxy S2's characteristics, you'll see that one of its key features is the 8MP camera it embeds. And sadly, there was no usable camera module around. It appeared to have a V4L2 driver though, which is pretty standard and easy to implement. However, I feared that I'd have to face the same situation as with audio: a standard interface that is only usable with a non-trivial proprietary layer on the side. Once again, I traced the kernel driver and started implementing, step by step. After a couple of weeks of work (I wrote the implementation from scratch and obviously couldn't spend time on it every day), it appeared that the original non-free camera module was doing a lot of unnecessary output/overlay operations. So I decided to cut out the crap and get to the essential, that is, only using the capture V4L2 interface. This comes with some issues, such as the inability to resize/crop the output buffer, but I think I found acceptable workarounds for that. In the end, my camera module turned out to work quite well and is now fully-featured (except for EXIF, which is currently broken, but it's such a pain in the ass that I don't really want to get into it and fix things). I pushed the code to the Galaxy S2 device tree as well as to my personal Exynos Camera git repo.
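For those unfamiliar with it, the capture-only V4L2 flow the module relies on looks roughly like this; the device node, resolution and pixel format are assumptions, and error checking is omitted for brevity:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/videodev2.h>

    int capture_one_frame(void)
    {
        int fd = open("/dev/video0", O_RDWR);
        struct v4l2_format format = { 0 };
        struct v4l2_requestbuffers requestbuffers = { 0 };
        struct v4l2_buffer buffer = { 0 };
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        void *memory;

        /* Set the capture format. */
        format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        format.fmt.pix.width = 3264;
        format.fmt.pix.height = 2448;
        format.fmt.pix.pixelformat = V4L2_PIX_FMT_JPEG;
        ioctl(fd, VIDIOC_S_FMT, &format);

        /* Request and map a single capture buffer. */
        requestbuffers.count = 1;
        requestbuffers.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        requestbuffers.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_REQBUFS, &requestbuffers);

        buffer.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buffer.memory = V4L2_MEMORY_MMAP;
        buffer.index = 0;
        ioctl(fd, VIDIOC_QUERYBUF, &buffer);
        memory = mmap(NULL, buffer.length, PROT_READ, MAP_SHARED, fd,
                      buffer.m.offset);

        /* Queue the buffer, stream and dequeue the filled buffer. */
        ioctl(fd, VIDIOC_QBUF, &buffer);
        ioctl(fd, VIDIOC_STREAMON, &type);
        ioctl(fd, VIDIOC_DQBUF, &buffer);
        /* buffer.bytesused bytes of image data are now available at memory. */
        ioctl(fd, VIDIOC_STREAMOFF, &type);

        munmap(memory, buffer.length);
        close(fd);

        return 0;
    }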

The future?

Now the Galaxy S2 is supported as well as the Nexus S in Replicant, and the missing (and doable) parts left are mainly GPS and compass. The compass is an AKM8975 chip. Some code was released by AKM for this chip, and even though my first attempts to make it work failed, I guess there is a way to get it working properly. I haven't renewed my attempts since this is quite a detail and there are probably more important things to work on at the moment. One of those is the GPS: it's a GSD4t chip, the very same as in the Galaxy Nexus. It needs a firmware upload and uses a SiRF-derived protocol that doesn't seem to be documented anywhere. I hope we'll be able to figure it out somehow: it would be very nice to have GPS support on these two devices!
