Paul Kocialkowski's coding blog

Free software, programming and stuff

Liberating Amusing Use Cases with the F60 Action Camera - Introduction: Motivations for Going Down the Road

Written by Paul Kocialkowski 1 comment
This series of blog posts is set to cover the journey of adding upstream U-Boot and Linux support for the F60 Action Camera, bringing freedom in digital technology to another exciting new use case!

Allwinner and Replicant Work

As I previously mentioned on the Replicant blog, I have started a final engineering internship at Bootlin (formerly Free Electrons) in Toulouse, France. The internship is focused on bringing upstream support for the VPU used in Allwinner platforms, as a continuation of the long-standing reverse engineering effort carried out by the linux-sunxi community.

XFCE running on an A10 tablet
This effort is also a continuation of my ongoing personal interest in these platforms, especially in mobile form factors such as widely-available tablets. Although I am no longer involved with contributing on the technical side of Replicant, I still care for the project and would like to see it move forward. Finding new developers for Replicant is hard and it's fair to say that a large part of the project is kept alive by its community of users and contributors. The project inherits from its upstream (currently LineageOS, which counts far more developers), but a significant number of adaptations have to be introduced to work around the use of proprietary blobs for hardware support, for each and every new version and device we want to support.

In the free software community, most of the effort related to liberating hardware support in a sustainable and future-proof direction happens (sometimes inadvertently so) in upstream projects such as Linux, coreboot and U-Boot. These projects usually attract more than enough contributions to keep their maintainers busy. Since features are added through a code review process (which requires submitting clean and maintainable code in the first place), the set of features supported by mainline is slow to catch up with the downstream modified versions provided by manufacturers, which are used instead in Replicant. For the mobile use cases that we aim to cover, the constraints are high in terms of required features and power management. This is why it did not seem like a good idea to ship mainline Linux in Replicant at first. However, using downstream software components has a cost in terms of maintenance (no kernel updates), security (no kernel updates) or long-term sustainability and compatibility with newer technologies (still no kernel updates). The amount of time spent dealing with code where only the very minimum amount of structure and conventionality is in place is also very significant.

U-Boot 2011.09-rc1-00000-gf3ad3c0-dirty (Jan 09 2018 - 18:05:09)

After a while working on Replicant, I became more and more reluctant to dedicate time to new contributions, mainly because I had to deal with downstream code that was just very unpleasant. It also felt like all the time and effort put in could only ever have a very short-term impact: the contributed software support was only useful as long as the Android and Linux versions stayed relevant (and, well, we can't say that they stay relevant for very long for these use cases).

This is why I was drawn more and more towards mainline software.

A Plea for Upstream Software

The effort to liberate hardware support is much more significant with mainline, but these contributions do tag along and stay, as long as there are people to care for them. The decisions regarding the code in future versions are also taken by the community instead of a single entity, although the community may also take decisions detrimental to freedom in digital technology. With mainline support landed in Linux, the userspace interfaces for systems are standardized and much less work has to be put in on the userspace side (or more precisely, it only has to be done once and incrementally updated with new features). This means less involvement required on the Replicant side, with the ultimate goal of making Replicant a mere rebranding of LineageOS without proprietary blobs and using only generic Hardware Abstraction Layers (HALs) shared across Android systems. The Android-x86 project already has such HALs for various aspects of userspace software support (some of which were developed by companies like Intel that stick close to mainline), but there is still work left to do to adapt these HALs for mobile use cases. Since these could benefit any Android system, it would be interesting to bring a coordinated effort for supporting mainline kernel interfaces through generic Android HALs (something an entity like Linaro could care for). This would allow redirecting the development effort for hardware support away from the downstream userspace side of Android systems and focusing it on mainline instead.

Of course, mainline support also benefits other GNU/Linux systems and opens up the possibility of properly supporting mobile use cases in these systems, while also enabling new use cases. Purism is currently working in that direction, with projects to integrate KDE and GNOME-based interfaces on a mobile telephony-enabled device. Running GNU/Linux on mobile devices is not exactly new though, as a first wave of interest rose around 2008 from the Openmoko community, revolving around the Neo FreeRunner phone running with GNU/Linux. One of the systems supporting that phone was SHR (for Stable Hybrid Release), which was based on the Freesmartphone.org (FSO) framework for mobile devices and went on to support other devices such as the N900 or the Nexus S. But just like with the original systems for the FreeRunner, using regular GNU/Linux applications on SHR was hard and required using a stylus. The interface was obviously not designed nor adapted for the mobile use case, just like plenty of other system components. On the kernel side, the amount of changes required for the FreeRunner made the upstreaming process slow (and it has still not been fully completed to this day).

GTA01 GTA02 GTA04
The GTA01 development device, the Neo FreeRunner (GTA02) and the GTA04

Nowadays, mobile devices revolve around the ARM instruction set (and the very commonly found ARM Cortex core implementations from the company) and ARM support in the mainline Linux kernel has come a long way since 2008. Most notably, the introduction of the Open Firmware device tree (initially designed for supporting PowerPC devices) allowed a clean distinction between the hardware description and the drivers, which were previously glued together in (plenty of) platform data initialization laid out directly in the source code. Back then, adding support for a new device was tedious and each kernel had to be compiled for a precise device. Device tree changed the game here, as it became much easier to introduce new device-specific support and possible at all to run the same kernel binary on different devices sharing the same ARM instruction set. This also made platform bringup a more streamlined task (by easily reusing similar blocks from previous generations and applying appropriate quirks at run-time). The number of supported ARM platforms, features and use cases kept on increasing, with the commitment of big companies like TI, Samsung, Google, ARM, Freescale/NXP or Rockchip, entities like Linaro or the linux-sunxi community but also smaller players (often providing upstreaming services to companies) like Bootlin (formerly Free Electrons), Collabora or Pengutronix. While most of the supported platforms are not ready for conventional mobile use cases yet, some come close and already offer support for a reduced number of use cases.
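
To illustrate the shift, here is a minimal sketch of a device-tree-aware platform driver in Linux; the "vendor,example-ip" compatible string and the driver itself are made up for the example, but the of_match_table mechanism is how real drivers bind to device tree nodes.

```c
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/* Hypothetical driver: the "vendor,example-ip" compatible string is made up.
 * The point is that board-specific details come from the device tree instead
 * of per-board platform data compiled into the kernel. */
static int example_probe(struct platform_device *pdev)
{
	struct device_node *node = pdev->dev.of_node;
	u32 frequency = 0;

	/* Read a board-specific property from the device tree node. */
	of_property_read_u32(node, "clock-frequency", &frequency);
	dev_info(&pdev->dev, "probed from DT, clock-frequency=%u\n", frequency);

	return 0;
}

static const struct of_device_id example_of_match[] = {
	{ .compatible = "vendor,example-ip" },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, example_of_match);

static struct platform_driver example_driver = {
	.probe = example_probe,
	.driver = {
		.name = "example-ip",
		.of_match_table = example_of_match,
	},
};
module_platform_driver(example_driver);

MODULE_LICENSE("GPL");
```

The board-specific details live in the device tree passed to the kernel at boot, so the same kernel binary runs on any board whose device tree provides a matching node.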

Signature Verifications Gone Wrong

Despite free software support in Linux, some platforms are fatally flawed when it comes to the boot software they are running. This is the case with enforced boot software signature verifications with missing keys, where the circuitry of the platform validates the digital signature of the first software running at boot against a public key stored in read-only memory. If the signature was not created using the (missing) private key associated with that public key, the platform will simply refuse to boot. A significant number of ARM platforms, such as the ones made by Qualcomm (Snapdragon and others), Amlogic or Samsung (Exynos), are plagued by enforced boot software signature verifications with missing keys. While signature verifications are a really good thing to have, they only make sense if the user of the device has access to the associated keys. When the keys are kept secret by the manufacturer of the device or the platform, the whole security model of the device is delegated to this third party. The user cannot decide on their own security model and consider potential threats based on their own situation, which may not treat this third party as a fully trusted peer. Instead, the implemented security model only covers this third party's threat model, which is designed to consider the user as a threat. While it might sound very odd, this is the result of enforcing technical limitations such as DRM, which are designed to forbid the user from accessing the unlocked raw multimedia contents.
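
Conceptually, the check performed by such boot ROMs boils down to something like the sketch below; every name in it is made up, since the actual ROM code is proprietary and undocumented.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of a boot ROM signature check: all names are made up
 * and the crypto primitive is only declared, not implemented here. */
extern const uint8_t rom_public_key[];	/* burned into read-only memory */

bool rsa_verify(const uint8_t *key, const uint8_t *data, size_t len,
		const uint8_t *signature);	/* assumed crypto primitive */

void boot_first_stage(const uint8_t *image, size_t len, const uint8_t *signature)
{
	/* The signature can only be produced with the private key, which the
	 * manufacturer keeps to itself: without it, no replacement image can
	 * ever pass this check. */
	if (!rsa_verify(rom_public_key, image, len, signature))
		for (;;)
			;	/* refuse to boot anything else */

	((void (*)(void))image)();	/* jump to the verified first stage */
}
```

The only thing missing for the user is the private key able to produce a signature that this check accepts.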

LG Security Error
This modus operandi looks very similar to what a parent would do to prevent their newborn child from wandering around during night time: applying a technical restriction such as bars on the infant's bed. This is perfectly fine when the individual subjected to the restriction is not able to decide what is good for themselves. But when it comes to individuals with that capacity, imposing such intentional restrictions, designed to serve this third party's interests first and foremost, feels very inappropriate.

But that's not the end of it: this limitation removes the practical possibility for the user to replace the software signed by these missing keys, making it proprietary software de facto. It sometimes occurs that free copylefted software is used in devices that enforce such signature verifications, as first happened with the TiVo (which coined the name Tivoization for the process). Even with source code and the appropriate compilers available, only intermediate forms of the software can be obtained. The product form of software (that is, the form adapted for practical use) is intrinsically tied to the processing unit that executes the associated software instructions. This is why the practical ability to install the generated intermediate form into the code storage from where the code is executed (effectively making it the product form of the software) is an absolute requirement. When signature verifications are enforced, the part of the installation process related to signing the binaries (so that they can be executed) is missing. The software thus never reaches its product form, leaving the user with freedom over the source and intermediate forms of the software but not over its product form. As a result, the software cannot be considered to be free as a whole, although its license says so.

Current Status of Free Upstream Support

Thankfully, a number of ARM platforms are not flawed in this manner and allow running free boot software. Examples of such platforms include Allwinner, Rockchip, i.MX, OMAP GP and Tegra (without keys burned). U-Boot is widely used on devices with these platforms and manufacturers often use modified downstream versions based off ancient upstream releases. The associated source code for both U-Boot and Linux is not always available, although these projects are covered by a copyleft license. And when it is, it often takes the form of a board support package tarball with no version control information. Some parts of the code required for hardware support can also be missing or deliberately moved out of copylefted projects (something that Android seems to vastly encourage). This is often the case with DRAM initialization and features related to multimedia. Nevertheless, these source releases are useful and often allow adding mainline support for these devices.

The Google OP1 (Rockchip RK3399) as found on the Samsung Chromebook Plus

At this point in time, the platforms mentioned and many of the devices that use them are supported, to a more or less advanced degree, by upstream free software. Many significant tasks are still ahead of us, but work is in progress and going at a good rate. Support for Vivante GPUs (found in i.MX platforms) was merged and is being improved with time. There is also work in progress for both Mali Utgard, with the revival of the Lima project, and Mali Midgard/Bifrost, with the Panfrost project, which concerns a number of platforms such as Allwinner and Rockchip. Multimedia features in Linux are also a work in progress, with the introduction of the media request API that will allow supporting stateless VPUs (which don't have a dedicated processor, nor require any firmware) in the V4L2 framework, coupled with the MEM2MEM framework that already allows supporting stateful VPUs such as the ones found on Exynos and the i.MX6 platform (which require a proprietary loaded firmware). Support for stateless VPUs has been a work in progress on the Rockchip side, with a reference driver available in the Chromium OS kernel tree, using an early proposal of the media request API, as well as an early driver from developer Ayaka, in addition to the reference Rockchip kernel driver. On the Tegra side of things, developer digetix from the Grate project has been working on VPU support for early generations of Tegra, which should apply to newer generations with limited effort. On Allwinner, VPU support is a work in progress and a number of versions of the Sunxi-Cedrus VPU driver have been submitted already.
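
As a rough illustration of what the already-working stateful case looks like from userspace, here is a sketch of the V4L2 mem2mem decoding flow; the device path and pixel formats are just examples, and buffer mapping, polling and error handling are left out.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Rough sketch of driving a stateful V4L2 mem2mem decoder from userspace.
 * The device path and formats are examples; the driver adjusts sizes and
 * the CAPTURE format based on the parsed bitstream. */
int main(void)
{
	int fd = open("/dev/video0", O_RDWR);	/* hypothetical decoder node */
	struct v4l2_format fmt;
	struct v4l2_requestbuffers req;
	int type;

	/* OUTPUT queue: the compressed bitstream fed to the decoder. */
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	/* CAPTURE queue: the decoded frames produced by the hardware. */
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12;
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	/* Allocate buffers and start streaming; the firmware running on the
	 * VPU keeps track of the decoding state in between buffers. */
	memset(&req, 0, sizeof(req));
	req.count = 4;
	req.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	req.memory = V4L2_MEMORY_MMAP;
	ioctl(fd, VIDIOC_REQBUFS, &req);

	type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	ioctl(fd, VIDIOC_STREAMON, &type);
	type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	ioctl(fd, VIDIOC_STREAMON, &type);

	/* From here on: queue bitstream buffers with VIDIOC_QBUF on the OUTPUT
	 * queue and dequeue decoded frames with VIDIOC_DQBUF on CAPTURE. */
	close(fd);
	return 0;
}
```

Stateless decoders follow the same buffer flow, but each bitstream buffer must be accompanied by the parsing state (handled by the media request API), since the hardware keeps no state of its own.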

Innards of an i.MX6 CuBox-i
In addition to VPU support, a number of image processing units are made available (especially so on Rockchip) with features ranging from colorspace conversion and scaling to 2D operations like bit blit and Porter-Duff alpha blending. Some of this processing is useful for general-purpose image processing, like colorspace conversion and scaling, and seems relevant to the V4L2 subsystem, while 2D operations seem more relevant to the DRM subsystem. There does not seem to be much support for 2D operation acceleration through generic DRM interfaces in regular userspace implementations from freedesktop.org. Support for such acceleration in compositors would allow significantly speeding up these operations without resorting to the GPU. Since GPUs in graphics cards are often required for video features in x86 systems, it has become a common assumption that GPU support is available for all types of graphical operations. Needless to say, GPU support on ARM is still a work in progress and is currently generally not available without proprietary software (especially on the many platforms that embed Mali or PowerVR GPUs). Moreover, using the GPU is not power-efficient compared to dedicated hardware components and sometimes does not even perform as fast.

However, the DRM subsystem already leverages plane overlays, which each have their own framebuffer and position on the CRTC and are blended by the hardware into a single frame before hitting the encoder. But plane support in Xorg is currently not an easy thing to implement, especially when it comes to showing YUV 4:2:0 video frames in a dedicated plane. The historical approach for supporting this use case consists in setting a color key on the display hardware, which treats a particular color as transparent and blends the video frame in its place, so that the video integrates with the rest of the display contents. One downside is that the coloured box bounding the video plane has to follow the movements of the window displaying it, so some coordination is required. This coordination, as well as color-keying, was historically provided by the Xv extension to the X server. Alas, this extension relies on buffer copies, introducing a significant performance bottleneck, whereas sharing buffers directly is the much preferred approach nowadays (made possible with DMABUF handles). Thankfully, DRI3 and Wayland bring solutions to the issues found on Xorg.

Big Buck Bunny shown in an overlay DRM plane
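
A rough sketch of how a decoded NV12 frame can be put on such an overlay plane with libdrm is shown below; the plane and CRTC IDs and the buffer handles are placeholders that would normally come from DRM resource discovery and from DMABUF imports.

```c
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <drm_fourcc.h>

/* Sketch only: IDs, sizes and handles are placeholders that would normally
 * come from DRM resource discovery and from the video decoder's DMABUFs. */
int show_video_frame(int drm_fd, uint32_t plane_id, uint32_t crtc_id,
		     uint32_t luma_handle, uint32_t chroma_handle,
		     uint32_t width, uint32_t height)
{
	uint32_t handles[4] = { luma_handle, chroma_handle };
	uint32_t pitches[4] = { width, width };	/* NV12: chroma pitch = luma pitch */
	uint32_t offsets[4] = { 0, 0 };
	uint32_t fb_id;
	int ret;

	/* Wrap the two NV12 planes (luma + interleaved chroma) in a framebuffer. */
	ret = drmModeAddFB2(drm_fd, width, height, DRM_FORMAT_NV12,
			    handles, pitches, offsets, &fb_id, 0);
	if (ret)
		return ret;

	/* Attach the framebuffer to the overlay plane: source coordinates are
	 * in 16.16 fixed point, destination coordinates in pixels. The display
	 * hardware blends the plane with the primary plane, no GPU involved. */
	return drmModeSetPlane(drm_fd, plane_id, crtc_id, fb_id, 0,
			       0, 0, width, height,
			       0, 0, width << 16, height << 16);
}
```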

Other features related to multimedia are slowly picking up support in Linux, with support for media pipelines and camera sensor input controllers. Other specific features also require significant effort, such as proper power management (a crucial feature on mobile devices). With all these developments happening, the idea of running mobile devices on upstream free software with a significant set of features allowing usual daily use cases slowly becomes a reality. These days, embedded mobile devices only seem to get more and more diversified: from laptops to phones and tablets, they are nowadays found in all sorts of form factors such as convertible laptops, single-board computers, mini PCs, dongles, home automation equipment or even connected surveillance, car and action cameras. All these new use cases are running with wild downstream software that causes a great deal of issues in terms of freedom and privacy/security. The proposed solution to allowing a somewhat sane use of this technology clearly has to start with upstream software support. Some of these products and use cases are not necessarily a good thing to have on a society-wide scale, but provided that they do exist, we might as well try and sanitize the technology that supports these new use cases.

Covering a New Interesting Use Case

When going through some of the latest released devices on Chinese marketplaces like Aliexpress or GearBest, I noticed a number of 4K action cameras featuring Wi-Fi and HDMI. With an ever-renewed interest in finding out about the guts of unusual devices (well, come to think of it, of usual devices too), I ran through the specs and found that the SoC is an Allwinner V3. One that probably has the VPU that I am writing support for. One that is also paired with an 8-megapixel sensor and a wide-angle lens, screaming to be piped to the VPU's encoder. But unfortunately, one that is not yet supported in upstream U-Boot and Linux. Thankfully, the community was quick to pick up support for the V3s SoC from Allwinner. It was contributed to mainline U-Boot and Linux by linux-sunxi community member Icenowy, who has also designed a board using the V3s. The V3s is an integrated LQFP package grouping 64 MiB of DDR2 DRAM and the same processor and controllers as Allwinner's V3 SoC.

Allwinner V3 SoC
In the past, Allwinner has already released chips with minor variations under different names, such as the A10s matching the A13 or the R16 matching the A33. These chips share nearly the exact same set of registers as the platforms they are based on (if not the very same), so when it came to adding support for them in U-Boot and Linux, only a minimal amount of work was required. With V3s support in good shape in free upstream hardware support projects, bringing up the V3 based on the existing V3s support, with a device in hand, looks like a rather reasonable task.

And so I clicked and typed and clicked and typed until I received a little package with my name on it and an F60 Action Camera inside. Next up in the process: discovering and documenting the components used in the camera, starting with a hardware teardown!

An incentive for liberating computers: my own use case

Written by Paul Kocialkowski 8 comments

Over the past months, I've been looking at moving my computing setups to more freedom-respecting ones. Currently, both the laptop and desktop that I use for doing serious work (read: writing code) are based on recent Intel platforms and run with a proprietary UEFI/BIOS. I initially made that choice to be able to run a fully free system, such as Trisquel, since Intel platforms come with fully free GPU drivers that don't require additional non-free firmwares and are officially supported by the manufacturer. Support for most other aspects of the hardware is also close to being flawless, so this makes the whole user experience really nice. However, it goes without saying that things aren't actually all that pretty, given that there is still a proprietary UEFI/BIOS in there, CPU micro-code updates, firmwares for e.g. the USB3 (xHCI) controller (all of which are pushed by the UEFI/BIOS) as well as a management engine (ME) and Intel's active management technology (AMT). So it all looks pretty from an operating system perspective, but things aren't at all pretty under the hood. All that brings serious concerns for privacy/security, but this is not really what I am the most concerned about, personally.

Recently, I got to appreciate running with a fully free bootloader when working on various aspects of U-Boot, the popular free software bootloader that is widely used on embedded devices, when freeing some of those devices “from the ground up”. However, I am not really using any of those devices (e.g. single board computers) that I am working on, since they simply aren't a good fit for doing any serious work on them. I tried using some as a home-theater PC, but the state of the art of free software support doesn't allow that use case yet. Hopefully, we will one day be able to run Kodi on those: there are promising leads on the i.MX6 platform with Etnaviv and hardware-accelerated video decoding, despite using a proprietary firmware. On the other hand, Allwinner platforms have some hardware video decoding support, both reverse engineered by the sunxi community and released by Allwinner, but I couldn't get it to work sufficiently well to allow watching a full 720p movie. Perhaps the lack of proper graphics acceleration is to blame here; I can't really tell (and there is already some 2D acceleration for Xorg). Qualcomm platforms do have some nice graphics acceleration and 3D support thanks to Rob Clark's work on freedreno, but those platforms don't allow running a free bootloader since the bootrom enforces a signature check on it, which is a no-go for me. On the other hand, I recently found out that using mesa and llvmpipe on the most powerful devices does bring a significant change, but it is still not packaged with Debian on armhf!

Still, those boards remain good for some other use cases, such as powering the servers that I use for hosting various services that I use on the Internet. This is actually one of the reasons why I got involved in working with embedded hardware, and those devices are still as good a fit as they were back when I started playing around with them. In addition, now that I started freeing mobile devices such as the LG Optimus Black (P970) and Allwinner tablets, I may also use those with Replicant. I got used to using the Galaxy Note 2 (N7100), that has a proprietary and signed bootloader, as my main phone after a year or so of daily use, but it's probably not too late to switch to a more freedom-respecting one, even if I'll probably miss some aspects of the big Samsung device. Either way, using the device I'm currently working on is one of the best ways to ensure that it's actually usable.

So this opens doors for liberating some aspects of my use of computing, but the computers I am using the most daily, my laptop and desktop, still remain fatally flawed. I have been looking at the list of devices supported by Coreboot for a long time and now that Libreboot came around, it's even easier to get an idea of what laptops and mainboards can run with fully free software, or close. At this point, the laptops supported by Libreboot are simply too old to fit my use case. I need something that can handle building a Replicant image in a decent time, that is an hour or so, without running out of memory. Thus, I wondered what could come closest to being fully free, both regarding software executed on the main processor and firmwares running in separate chips. Among the most recent boards supported by Coreboot, I decided to skip the Intel ones, since their ME is nearly impossible to disable or liberate (it is signed). In addition, not all of those have free native DRAM initialization and free video BIOS support, despite developers' truly great efforts there. Thus, I started looking at boards based on AMD platforms, the other half of x86 platforms as they exist today. A little while ago, AMD made a nice move forward by freeing their AGESA BIOS reference code for inclusion in Coreboot, which supports recent chipsets (I was told they have decided to stop contributing, though). The code itself is a nightmare to work with and the fact that it's used as-is in Coreboot doesn't make development a particularly fun time, but at least, it's there. And it allowed developers to add support for a few interesting boards that are rather recent. A few interesting desktop motherboards are there and one particularly caught my attention: the F2A85-M.
At the time of writing, a slightly different version of it, the F2A85-M PRO, is still being sold brand new in (French) online shops, so it's very easy to get. Former Replicant developer GNUtoo and I decided to get one each and get Coreboot running on it as soon as possible. Apparently, someone already attempted that port in the past, but gave up without publishing all the work-in-progress patches. Only code for the Super I/O (which is different from the non-PRO version of the board) was found, so we still have to figure out what the other differences between the F2A85-M and the F2A85-M PRO are to properly support it in Coreboot. Getting in touch with the original developer who gave up could come in handy for this.

Among the boards supported by Coreboot (thanks to the AMD AGESA code) is a laptop matching my expectations: the Lenovo G505s. It is somewhat similar to the F2A85-M, only that it trades its Super I/O for an embedded controller, a better fit for a laptop and its required power management constraints. Both of these come with a Radeon GPU inside the CPU (and if I got it right, some versions of the G505s also have another Radeon GPU on-board), which also holds the northbridge by the way. At this point, the Radeon GPU cannot be used out of the box without both a non-free video BIOS (that is a blob executing instructions to set up the video card at UEFI/BIOS time) and a non-free firmware. However, that situation could probably be improved.

Since we're stuck with the Radeon card on the G505s, there is very little choice but to use the video BIOS (it also holds necessary bits for the radeon driver to work). One might also prefer not to use the proprietary firmware that runs in the GPU and avoid the radeon driver altogether, but this ends up relying on VBE (VESA BIOS Extensions), which calls back into the video BIOS in the end. Of course, on such a powerful laptop, using llvmpipe instead of radeon as the mesa backend can be painless for many use cases.

On the other hand, the F2A85-M has full PCI-e ports, so one can plug in an external nVidia card that is supported by nouveau, the free software graphics driver for those. Display support in Coreboot still requires the non-free video BIOS (and it also has some bits that are needed by nouveau). In addition, the nouveau driver also requires a firmware to run on the card, but it was freed for the card that I decided to settle for, a GeForce 610 with 2 GiB RAM. Early tests report that it should cope just fine with the things I do on that desktop computer (gnome-shell, flightgear and some more).

In addition to full-size desktop and laptops, I also got myself interested in smaller (and more traveler-friendly) form factors on which I can still do significant work. I have been looking at Chromebooks for some time now, especially because they ship with Coreboot and free software on the embedded controller, which is quite unique. However, up until recently, all the Chromebooks needed proprietary software to boot up (for various bits on Intel x86 platforms, or because the bootrom was enforcing a signature check with a manufacturer key that cannot be replaced on ARM platforms). However, Google recently released a batch of Chromebooks based on the Rockchip RK3288 SoC (the veyron family), which is known (thanks to the rockchip community and the various makers of community-friendly hardware based on Rockchip chips) not to enforce such signature checks. Thus, it allows running a free bootloader as early as possible. The C201 Chromebook by Asus was released a few months ago with a RK3288 SoC, so I decided to get one of those and see what we can do with it. The goal is to port Libreboot to it and so far, the results have been very positive: that's the machine that I am currently using to type this post and it's running with fully free software from the bootloader (Coreboot) up to the operating system. All the micro-controllers I'm using on it are also running free software (that is, using an external ath9k_htc Wi-Fi dongle). The security model implemented is truly great, kudos to the Chromium OS developers for it! I was indeed able to replace the keys inside the RO part of the SPI flash memory, sign a kernel with my own keys and have a verified boot setup that way!

All that stuff keeps me busy (and sadly, makes me way behind on Replicant-related work), so stay tuned for more details on specific aspects of those things!

RMLL 2015 debriefing

Written by Paul Kocialkowski 4 comments

This year's edition of RMLL/LSM, the free software conference that travels in and out of France (with an international aim) just ended. Time to take a step back and look at what happened during the 4/5 days I was there.

Thankfully, I get to travel to such conferences using money from the Replicant fund, so I will be refunded both my train tickets and my stay this time again. It makes it much easier (and to be honest, possible at all) for me to attend such conferences. This way, I don't have to worry about finding a summer job and can instead focus on what I do best, reverse engineering proprietary stuff and writing (free) replacement code.

Monday

This time, I arrived on Monday afternoon and could attend a first talk after a quick chat with the lovely people from the information booth. The talk, which was part of the security track, was presented by Lunar (Tor and Debian developer) and reported on the current state of the art of reproducible builds for Debian (and more). It was really nice to see such overwhelming progress accomplished, after I attended the initial talk during which he announced the reproducible builds initiative a year and a half back, at FOSDEM. Lunar's talk answered most of the questions I had regarding how to make software reproducible. I am especially interested in making the U-Boot bootloader reproducible. I had had that idea at the back of my head for some time and decided to jump in after seeing a contribution in that direction on the U-Boot mailing list. Eventually, we managed to get some of that work done (right) later in the week. The rest of the afternoon was filled with chatting around in the village. In the evening, I met people from the event at a local bar, where free music was being played. It was a nice atmosphere and we had some interesting technical discussions (and let's be honest, many trolls as well)! I was thrilled to see that people were not only aware of Replicant, but also had a lot of interest in it.

Tuesday

On Tuesday, it was time to get to the workshop I was supposed to co-host. The whole day was filled with various activities around different kinds of embedded devices (some were about scientific measurements, some about Arduino, etc). In addition, most of these were built with education in mind. When the first one ended, it was time for me to leave in order to reach the room where I was to present my first talk. The video recording seemed to be done right and hopefully, the video of the whole thing will be available eventually. Not that many people showed up, but the ones that were there seemed really interested. I got to meet and talk with a few people after my presentation, some of whom decided to come to RMLL only to have a chat with me. What a surprise! The afternoon went on and I attended a few talks, including a round table around the concept of civilian re-appropriation. It was presented by Veronique Bonnet, who's a philosopher and a member of April, one of the French associations that take a stand for freedom on digital devices (and actually get it right). Richard Stallman (RMS) was also there, even though he apparently didn't quite understand the wording of the subject in French. Still, some interesting things were said and RMS displayed his usual sense of humour here and there, sometimes making the audience burst into laughter. Once it was over, we got to chat a bit, in a very friendly environment, which was very nice. A free music concert was organized near the event, so a few youngsters (including myself) decided to go before calling it a day.

Wednesday

Wednesday was the occasion for me to be around the workshop more often, but very few people showed up because it was missing from the printed schedule, something I only came to realize once it was too late, a week before the event or so. Despite some paper indications and the addition of the workshop to the online program, the place remained rather quiet, which wasn't so much of a problem given my aggravating state of sleep deprivation. Before lunch, I gave my other talk about Replicant, a longer and much more technical one. To my surprise, many more people showed up (perhaps the result of meeting a few people during the first few days). The talk itself went well and everything fit on schedule. For the record, the content of both talks (which summed up to 1 hour and 40 minutes, mostly excluding questions) was what I had planned on delivering during my (50-minute long) talk at FOSDEM this year: no wonder I had to stop half-way back then! Afterwards, I was lucky to get help for making U-Boot reproducible from Lunar, whose efficiency, vivacity and kindness really made the task painless. There are still bits and pieces to bring together to craft a proper patch, but I'll get around to doing it sooner or later. After alternating between the workshop and talking to great people at the village, I ended up meeting lots of interesting people again at a Harry Potter-themed bar, le Chaudron Baveur (not that the owner deserves any particular good word about it, given that he wasn't exactly pleasant).

Thursday

The next day went on pretty much similarly, except that I had no talk left to give, and thus no particular pressure or place to be at (except for the workshop, which remained desperately empty). Just like any other day at RMLL, I met tons of incredible people and had lots of interesting talks. In the afternoon, the main “political” event of the week took place, with a round table regarding interoperability and DRMs. The speakers were a high-ranking official from HADOPI and Marie Duponchelle, who conducted a thesis on the very subject. Overall, it was very strange, mostly because the nature of the debate soon revealed itself to be astonishingly stupid and a pure waste of time. The main question was how to allow the entertainment industry to use DRMs while maintaining interoperability. The answer is plain and simple: it can't be done. Despite that very clear statement, which was eventually made by Marie Duponchelle (in spite of the situation Videolan was in), the debate went on and the HADOPI representative produced, one after the other, vague statements with apparently no ties to technical reality. At some point, the audience got pissed off and started expressing our community's point of view in very clear ways, such as encouraging everyone to share culture in the most efficient ways: torrent, VPNs and Tor. All that followed by rounds of applause, naturally. More serious questions were raised, such as the existence of the public domain in practice when only copies of a piece of art exist with DRMs. The HADOPI representative answered that any piece of art is itself distinct from the media it is distributed on, which may be a fair point, but doesn't solve anything. She also suggested that the BNF could receive non-protected copies of it, but this is neither its mission nor a reliable solution for people who will find a DRM-tainted copy decades later, unable to read it despite the fact that it is in the public domain. The talk ended with François Revol (Haiku developer) handing over a big coin of 1 Hadopi to the representative, a way to show our community's support for this organism at a time of budget cuts. Bottom line: this was purely a waste of time (despite providing some form of entertainment). No wonder some decided to master the fine art of origami during the talk instead of listening to that whole mess. Hopefully, the main political talk will prove to be more interesting next year. In any case, it probably cannot sink much lower.

Later that day was the repas du libre, the traditional classy-ish dinner where we all meet together and look back at the week (everybody knows Friday is mostly for getting over the hangover induced by the previous night's drinking and also for packing). I didn't plan on attending at first, since the food wasn't really worth it last year, but changed my mind given some pretty solid arguments. Or maybe just pretty at all. In any case, I got to formally meet Benjamin Bayart (some fine blood forensics can probably attest to that) who not only showed interest in Replicant (and other things I'm doing these days) but offered me his help in every way possible. That evening is probably the time I had the most fun at RMLL, thanks to Benjamin, Fabien, Frédéric, jfefe and plenty of others. Kudos to them for their support in times of great need, that was a relief.
Thankfully, my LG Optimus Black (P970) booted just fine, so in the end, it's fair to say that the various issues encountered were accounted for and that the whole thing provided a working result, that will certainly become a base for future developments, now that the initial trouble is behind us.

Friday

Friday was a bit less fun than the other days, in part for reasons of a physical nature. I still managed to reach the event in time to be reminded that Trinity does use nmap and it's fair to say that it's the coolest thing. Sadly, some people had to leave early and I couldn't conclude some of the ongoing arguments that had developed throughout the week. Hopefully, there will be other occasions to meet (and certainly closer than Beauvais), but that's ultimately not really up to me, despite my best intentions.

A hacker's journey: freeing a phone from the ground up, fourth part

Written by Paul Kocialkowski 5 comments

That whole I2C issue took me close to two months to resolve. As I recall, probing finally worked on my last day of summer vacation. I was so happy to have finally figured the issue out that, over the next week or so, I cleaned up the X-Loader code and got it to a state where it could load LK (the only working second-stage bootloader binary I had at my disposal) from the external sdcard.

U-Boot

A few months went by and I started working on other things, like Allwinner devices, in my ever-shrinking amount of spare time. At some point, I decided I needed to get back to it and properly port U-Boot. Of course, we're talking about upstream U-Boot here, as I'm not a big fan of either fighting with the code LG released (which is old, poorly written and contains a lot of dead code) or adapting versions of U-Boot code coming from TI. Working with upstream has countless advantages, among which I see an opportunity to get familiar with the current state of the U-Boot code as well as going through the process of having my code reviewed by peers, which is always a very enriching experience. It ensures that code is written the right way, fits well in the overall design and doesn't break something else. The temptation of making easy and nasty changes to suit our case better is big when working on our own personal fork of such a big project and that's exactly what I wanted to avoid. Finally, having the phone in my pocket supported by upstream is pretty cool, right?

As I first dug into the U-Boot code, it struck me that an increasing number of devices are using the U-Boot SPL as a first-stage loader. The U-Boot SPL is a minimalistic build of U-Boot that only contains the minimal amount of code needed to chainload the usual, complete U-Boot binary. There is SPL support for the OMAP3, so I thought I might as well give it a try, since X-Loader is an old copy of deprecated U-Boot code anyway. It didn't take me long to get it to run, with the usual base address correction due to the fact that I'm using peripheral booting. I will most certainly drop X-Loader and go with U-Boot and the U-Boot SPL for the future.
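
Conceptually, the SPL's job boils down to something like the sketch below; all the function names are made up for illustration, as the real U-Boot SPL is organized around U-Boot's board and driver infrastructure rather than a single flat function.

```c
#include <stdint.h>

/* Conceptual sketch of what a first-stage loader such as the U-Boot SPL does;
 * every name below is made up, the real code is structured very differently. */
#define UBOOT_LOAD_ADDRESS	0x80100000	/* example DRAM load address */

void clocks_init(void);		/* set up PLLs and peripheral clocks */
void dram_init(void);		/* bring up the external DRAM controller */
int boot_device_read(void *dest, uint32_t offset, uint32_t size);

void spl_main(void)
{
	void (*uboot_entry)(void) = (void (*)(void))UBOOT_LOAD_ADDRESS;

	/* The SPL runs from on-chip SRAM, which is too small for full U-Boot,
	 * so it only does the strict minimum before chainloading. */
	clocks_init();
	dram_init();

	/* Load the complete U-Boot binary from the boot medium (eMMC, sdcard
	 * or the peripheral boot channel) into DRAM and jump to it. */
	boot_device_read((void *)UBOOT_LOAD_ADDRESS, 0, 512 * 1024);
	uboot_entry();
}
```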

Current status and overlook of the future

U-Boot running
To this day, I have submitted a handful of patches upstream to add support for various features that I need for the P970 (named sniper in the code) and most of those have already been accepted. The rest of it is available on my personal git repository for the Optimus Black (P970), codename sniper, U-Boot port. I am still working on getting board support ready with a certain amount of features, like fastboot support (including eMMC flashing) and being able to get the SPL to boot U-Boot from the external sdcard on demand, to allow failsafe U-Boot development without having to unsolder a resistor.
A few other things are needed in order to be able to load and run kernels painlessly: currently, booting Linux is working but a few features appear to be broken.

The next steps in this journey are the following: getting board support ready and accepted in upstream U-Boot and then starting the Replicant port. It shouldn't be too much work to get Replicant running on the device, so I'm confident it will come around. Of course, I will properly document the device on the Replicant wiki at some point in the near future. I am also hoping to talk about this device at FOSDEM this year, among other exciting new things that I've been doing recently.

And once upstream U-Boot support is merged, why stop there? I could just as well try and get some Linux support for it upstream: that could be an interesting and challenging task, despite not being crucial for software freedom. And of course, once everything is properly documented, everyone is welcome to join in and help!

Finally, during all this, I've come to learn about a few other mainstream OMAP GP devices (which I will also document at some point), such as the first generation Kindle Fire from Amazon, which is really likely to become my next target for Replicant after the Optimus Black. Thanks to the great work already achieved by developer Hashcode, a U-Boot port is available, and instructions on how to get UART from the device are also available!

Conclusion

Overall, I'm definitely very happy with the way this whole experience went. It most certainly taught me a lot about how low-level initialization and bootloaders work and hopefully offered me a chance to make a real difference for those of us who care about having free bootloaders running on the devices in our pockets, in addition to a free system. The odds also seem to be in our favor when it comes to modem isolation, so the whole device is pretty close to being “as good as it gets” today. To be fair, not everything is great about it: graphics acceleration and 3D will still be proprietary, the GPS protocol is still unknown and the video decoding unit (DSP) as well as Wi-Fi and Bluetooth require loaded proprietary firmwares.

As a final word, I'd like to thank whoever decided to go with the GP version of the OMAP3630 at LG. Heck, that decision took me on one hell of a ride!



This post is part of a series of articles about freeing the LG Optimus Black (P970):

A hacker's journey: freeing a phone from the ground up, third part

Written by Paul Kocialkowski no comments

With all that soldering successfully performed on my LG Optimus Black (P970), I was finally able to load my own code and actually see the results, on UART.

Building and running

X-Loader
I first tried to build the X-Loader source code that was released by LG and, after addressing some relocation issues, got it to finally print something on serial! The first step was to get X-Loader to run on the device, to a point where it could load and run U-Boot. I had at my disposal the version released by LG, which is not exactly the most recent one out there, and found various other versions on the Internet. Overall, I was told that X-Loader development is a bit of a mess: it was initiated by TI for OMAP devices, but there is no clear notion of upstream for it, just different branches that work on different OMAP devices. The closest thing I could find was the X-Loader project on Gitorious, which was an initiative to gather all the various X-Loader trees into one, as I was told. That felt like the right path to follow, so I decided to use it as a base for developing the Optimus Black (P970), codename sniper, X-Loader port. Most things worked nicely by importing code from the LG release; even RAM initialization was rather painless and worked straight away. Soon enough, I was at a point where my concern was loading and running the next-stage bootloader: U-Boot.

Loading the next stage

The Optimus Black (P970)'s internal storage is eMMC. There is a partition for X-Loader and one for U-Boot too. As I wanted to keep the devices in a usable state (a working shell on a kernel that actually works can turn out to be an important resource when working close to hardware), I decided to avoid flashing the internal memory. And flashing requires the current system to work, so that would only have worked once. Not really a possibility. Of course, I would still need to be able to boot from internal memory for production, once everything is known to work. Internal storage is connected to the OMAP3's MMC2 controller while the external sdcard is connected to MMC1. The current X-Loader code would only deal with MMC1, so I decided to give booting from internal memory a shot. After all, there was already a bootloader in there (LK) and my X-Loader code should have been able to boot it as well. Given that the bootrom has to be able to boot directly from MMC2 (it is the first boot option), everything is wired according to the TRM: that is, using the appropriate TWL5030 regulators. Thanks to that, the code in X-Loader was able to use the eMMC without any change and it would soon load and run the preinstalled LK.

Booting from the external sdcard

Real trouble began when I started looking into getting X-Loader to read from MMC1 (the external sdcard). Modifying X-Loader so that it would also allow booting from MMC1 wasn't enough to do the trick. As it turned out, MMC1 is powered by an external PMIC, the LP8720: one of its LDOs is dedicated to powering MMC1 (3.0V_MMC). Looking at the schematics, it appeared that the LP8720 is contacted through I2C3. Thankfully, it had good enough documentation available (in addition to the reference code from LG), so it was really easy to figure out how it works. X-Loader's I2C code wasn't designed to allow changing the bus either, so I had to implement that too. With the bus switched to I2C3, I should have been able to communicate with the device. Except that it didn't work. I spent a crazy amount of time looking at the TRM, making sure that every necessary clock was enabled and every needed pin muxed properly, but nothing helped. The controller would only return the same error. Nevertheless, it worked perfectly on the preinstalled Android version and other buses were working normally too. It just didn't seem to make sense and after a week or so, this got me very frustrated.
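
For the record, the intent looks roughly like the sketch below, expressed with U-Boot's legacy I2C API rather than X-Loader's actual code; the LP8720 slave address, register offset and enable bit are placeholders, not values taken from the datasheet or from the LG code.

```c
#include <common.h>
#include <i2c.h>

/* Sketch of the intent: switch the controller to the I2C3 bus and enable the
 * LP8720 LDO powering MMC1. The slave address, register offset and enable bit
 * below are placeholders, not actual LP8720 values. */
#define LP8720_I2C_BUS		2	/* I2C3, with buses numbered from 0 */
#define LP8720_I2C_ADDR		0x7d	/* placeholder slave address */
#define LP8720_REG_LDO_EN	0x08	/* placeholder enable register */

static int lp8720_enable_mmc1_ldo(void)
{
	uchar value;

	/* Select the bus the PMIC sits on; this is the part that X-Loader's
	 * I2C code did not support and that had to be added. */
	i2c_set_bus_num(LP8720_I2C_BUS);

	if (i2c_read(LP8720_I2C_ADDR, LP8720_REG_LDO_EN, 1, &value, 1))
		return -1;

	value |= 0x01;	/* placeholder enable bit for the 3.0V_MMC LDO */

	return i2c_write(LP8720_I2C_ADDR, LP8720_REG_LDO_EN, 1, &value, 1);
}
```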

Frustration and investigation

Spare board
At some point, I even wrote an I2C probe function from scratch based on what the TRM recommends, with no better result than the original code. Looking at it more closely, it seemed that the controller wasn't getting any NACK. I was able to reproduce the same behavior by disabling input on another I2C bus, or by muxing the pins to GPIO. Still, it didn't make sense that it happened all the time on I2C3, so I asked on the TI E2E community forums but didn't really get a clue there. Since I had another working device at my disposal, I decided to tear its board off, keep only the USB connector, load my code and look with a probe at what was actually going on on the bus. Lacking proper hardware, I used my buspirate as a logic analyzer but it didn't really work out, and neither did my Arduino Uno. It was also particularly hard as I had to probe on two very tiny resistors to get I2C3's SCL and SDA lines. I decided to head back to LabX, where I knew that people would have a clue about how to do this properly. It turned out that nouveau developer Martin Peres was around at that time and kindly offered to bring his own digital oscilloscope. That was really a life saver! After a short while, we were able to probe the SCL and SDA lines and see things actually going on in there. A few times, we were even able to catch a full trace with SDA and SCL simultaneously. Martin knew the I2C protocol well, so we were able to decode the transaction, only to find out that everything was going on just fine. NACK was there and I just couldn't understand why the controller wasn't getting it.

I2C3 trace

GPIO to the rescue!

Probing the spare board
In the meantime, while I was discussing the issue with fellow developer GNUtoo, he suggested that I mux the pins to GPIO to see what the lines read. The device with UART attached did show 1 for the GPIO read on the I2C1 SDA and SCL lines, but showed 1 on SDA and 0 on SCL for I2C3. It was as if the SCL line wasn't pulled high as it should have been. I could not actually measure the voltage there, as opening up the case would tear off my UART setup. On the spare board, it was clear from reading the voltage that SCL was pulled high, so the 0 GPIO read did seem like the I2C controller on the OMAP wasn't reading the values properly. It still worked normally on Android though, so this was clearly not a hardware problem. That's when I decided to take an extra step: checking whether the spare board actually read 0 like the board with UART did. Since the spare board didn't have UART, I needed some other way to read back the values. The most straightforward solution was to use the same I2C3 pins, read the boolean value, then configure the GPIO to output and write back the opposite of the read value (so that I would clearly see the difference).
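
In U-Boot-style code, that trick looks roughly like the sketch below; the GPIO numbers assume that the OMAP3 I2C3 pads mux to GPIOs 184 and 185, and the pin muxing itself is assumed to have been switched to GPIO mode beforehand.

```c
#include <common.h>
#include <asm/gpio.h>

/* Sketch of the UART-less debugging trick: read the level seen on the I2C3
 * pads once muxed as GPIOs, then drive them to the opposite level so the
 * result can be read back with a voltmeter. The GPIO numbers are assumptions
 * for the OMAP3 I2C3 pads. */
#define GPIO_I2C3_SCL	184
#define GPIO_I2C3_SDA	185

static void i2c3_lines_debug(void)
{
	int scl, sda;

	gpio_request(GPIO_I2C3_SCL, "i2c3_scl");
	gpio_request(GPIO_I2C3_SDA, "i2c3_sda");

	gpio_direction_input(GPIO_I2C3_SCL);
	gpio_direction_input(GPIO_I2C3_SDA);

	scl = gpio_get_value(GPIO_I2C3_SCL);
	sda = gpio_get_value(GPIO_I2C3_SDA);

	/* Output the opposite of what was read: a line that read 0 is now
	 * driven to 1 and vice-versa, making the reading unambiguous. */
	gpio_direction_output(GPIO_I2C3_SCL, !scl);
	gpio_direction_output(GPIO_I2C3_SDA, !sda);
}
```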

Sub-board connected
And it turned out that reading those values with my voltmeter indicated that the spare board was reading a logical 1 on I2C3 SCL. Thus, my two devices were behaving in two different ways. Moreover, it was likely that I2C3 had been working correctly on the spare board since the very beginning. Now what is the difference between those boards? The most obvious thing is that one is loaded with all the sub PCBs while the other one had only the USB connector attached, which would allow me to probe the I2C3 resistors. Regarding the I2C3 bus, there were indeed a handful of slave devices attached to it on a sub-board that I had removed, so I plugged the sub-board back and read the GPIO values again. It then indicated 1 for SDA and 0 for SCL! So the differences between both setups were apparently caused by that sub-board, and something in there was bringing SCL to 0. The slave devices were mostly sensors and looking at the schematics revealed that they were using dedicated regulators. After a quick check, it appeared that 1.8V_MOTION_VIO and 3.0V_MOTION were possibly not enabled. Those are LDOs from the TWL5030, so I thought I would just give powering them up a shot. Right after that change, the GPIO read on the UART-enabled device turned to 1 for SCL! Another build to switch the muxing back to the I2C controller and I could finally probe the LP8720 regulator correctly!



This post is part of a series of articles about freeing the LG Optimus Black (P970):
