Mastering Embedded Linux Programming: Selecting a Build System (5/16)



Selecting a Build System




In the preceding chapters, we covered the four elements of embedded Linux and showed you, step by step, how to build a toolchain, a bootloader, a kernel, and a root filesystem, and then combine them into a basic embedded Linux system. And there are a lot of steps! Now, it is time to look at ways to simplify the process by automating it as much as possible. I will look at how embedded build systems can help, and look at two of them in particular: Buildroot and the Yocto Project. Both are complex and flexible tools, and describing fully how they work would require an entire book. In this chapter, I only want to show you the general ideas behind build systems. I will show you how to build a simple device image to get an overall feel of the system, and then how to make some useful changes using the Nova board example from the previous chapters.

In this chapter, we will cover the following topics:

Build systems

Package formats and package managers
Buildroot
The Yocto Project



Build systems


The process of creating a system manually, as described in Chapter 5, Building a Root Filesystem, is what I call the Roll Your Own (RYO) process. It has the advantage that you are in complete control of the software, and you can tailor it to do anything you like. If you want it to do something truly odd but innovative, or if you want to reduce the memory footprint to the smallest size possible, RYO is the way to go. But, in the vast majority of situations, building manually is a waste of time and produces inferior, unmaintainable systems.

The idea of a build system is to automate all the steps I have described up to this point. A build system should be able to build, from upstream source code, some or all of the following:

A toolchain

A bootloader
A kernel
A root filesystem

Building from upstream source code is important for a number of reasons. It means that you have peace of mind that you can rebuild at any time, without external dependencies. It also means that you have the source code for debugging, and that you can meet your license requirements to distribute the code to users where necessary.

Therefore, to do its job, a build system has to be able to do the following:

1.    Download the source code from upstream, either directly from the source code control system or as an archive, and cache it locally.

2.    Apply patches to enable cross compilation, fix architecture-dependent bugs, apply local configuration policies, and so on.

3.    Build the various components.

4.    Create a staging area and assemble a root filesystem.

5.    Create image files in various formats ready to be loaded onto the target.

Other useful capabilities are as follows:


1.    Add your own packages containing, for example, applications or kernel changes.

2.    Select various root filesystem profiles: large or small, with and without graphics or other features.

3.    Create a standalone SDK that you can distribute to other developers so that they don't have to install the complete build system.

4.    Track which open source licenses are used by the various packages you have selected.

5.    Have a user-friendly user interface.

In all cases, build systems encapsulate the components of a system into packages, some for the host and some for the target. Each package is defined by a set of rules to get the source, build it, and install the results in the correct location. There are dependencies between the packages and a build mechanism to resolve the dependencies and build the set of packages required.

Open source build systems have matured considerably over the last few years.

There are many around, including the following:



Buildroot: This is an easy-to-use system using GNU Make and Kconfig (https://buildroot.org)
EmbToolkit: This is a simple system for generating root filesystems; the only one so far that supports LLVM/Clang out of the box (https://www.embtoolkit.org)
OpenEmbedded: This is a powerful system, which is also a core component of the Yocto Project and others (http://openembedded.org)
OpenWrt: This is a build tool oriented towards building firmware for wireless routers (https://openwrt.org)
PTXdist: This is an open source build system sponsored by Pengutronix
The Yocto Project: This extends the OpenEmbedded core with metadata, tools, and documentation, and is probably the most popular of these systems (https://www.yoctoproject.org)

I will concentrate on two of these: Buildroot and the Yocto Project. They approach the problem in different ways and with different objectives.

Buildroot has the primary aim of building root filesystem images, hence the name, although it can build bootloader and kernel images as well. It is easy to install and configure and generates target images quickly.

The Yocto Project, on the other hand, is more general in the way it defines the target system, and so it can build fairly complex embedded devices. Every component is generated as a binary package, by default, using the RPM format, and then the packages are combined together to make the filesystem image. Furthermore, you can install a package manager in the filesystem image, which allows you to update packages at runtime. In other words, when you build with the Yocto Project, you are, in effect, creating your own custom Linux distribution.



Package formats and package managers

Mainstream Linux distributions are, in most cases, constructed from collections of binary (precompiled) packages in either RPM or DEB format. RPM stands for the Red Hat Package Manager and is used in Red Hat, SUSE, Fedora, and other distributions based on them. Debian and Debian-derived distributions, including Ubuntu and Mint, use the Debian package manager format, DEB. In addition, there is a lightweight format specific to embedded devices known as the Itsy package format, or IPK, which is based on DEB.

The ability to include a package manager on the device is one of the big differentiators between build systems. Once you have a package manager on the target device, you have an easy path to deploy new packages to it and to update the existing ones. I will talk about the implications of this in Chapter 8, Updating Software in the Field.



Buildroot


The Buildroot project website is at http://buildroot.org.

The current versions of Buildroot are capable of building a toolchain, a bootloader, a kernel, and a root filesystem. It uses GNU Make as the principal build tool. There is good online documentation at http://buildroot.org/docs.html, including The Buildroot user manual at https://buildroot.org/downloads/manual/manual.html.



Background


Buildroot was one of the first build systems. It began as part of the uClinux and uClibc projects as a way of generating a small root filesystem for testing. It became a separate project in late 2001 and continued to evolve through to 2006, after which it went into a rather dormant phase. However, since 2009, when Peter Korsgaard took over stewardship, it has been developing rapidly, adding support for glibc-based toolchains and a greatly increased number of packages and target boards.

As a matter of interest, Buildroot is also the ancestor of another popular build system, OpenWrt (http://wiki.openwrt.org), which forked from Buildroot around 2004. The primary focus of OpenWrt is to produce software for wireless routers, and so the package mix is oriented toward the networking infrastructure. It also has a runtime package manager using the IPK format so that a device can be updated or upgraded without a complete reflash of the image. However, Buildroot and OpenWrt have diverged to such an extent that they are now almost completely different build systems. Packages built with one are not compatible with the other.



Stable releases and long-term support


The Buildroot developers produce stable releases four times a year, in February, May, August, and November. They are marked by git tags of the form: <year>.02, <year>.05, <year>.08, and <year>.11. From time to time, a release is marked for Long Term Support (LTS), which means that there will be point releases to fix security and other important bugs for 12 months after the initial release. The 2017.02 release is the first to receive the LTS label.



Installing


As usual, you can install Buildroot either by cloning the repository or downloading an archive. Here is an example of obtaining version 2017.02.1, which was the latest stable version at the time of writing:

$   git clone git://git.buildroot.net/buildroot -b 2017.02.1

$   cd buildroot

The equivalent TAR archive is available at http://buildroot.org/downloads.

Next, you should read the section titled System requirements from The Buildroot user manual, available at http://buildroot.org/downloads/manual/manual.html, and make sure that you have installed all the packages listed there.



Configuring


Buildroot uses the kernel Kconfig/Kbuild mechanism, which I described in the section Understanding kernel configuration in Chapter 4, Configuring and Building the Kernel. You can configure Buildroot from scratch directly using make menuconfig (or xconfig or gconfig), or you can choose one of the 100+ configurations for various development boards and the QEMU emulator, which are stored in the directory, configs/. Typing make list-defconfigs lists all the default configurations.

Let's begin by building a default configuration that you can run on the ARM QEMU emulator:

$ cd buildroot

$   make qemu_arm_versatile_defconfig

$   make

You do not need to tell make how many parallel jobs to run with a -j option: Buildroot will make optimum use of your CPUs all by itself. If you want to limit the number of jobs, you can run make menuconfig and look under the Build options.
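If you would rather record the limit in a stored configuration than set it through the menus, the underlying Kconfig variable is BR2_JOBS. This is an illustrative fragment, not a required setting; check the symbol in your Buildroot version:

# Cap Buildroot at four parallel jobs; 0 means autodetect the CPU count
BR2_JOBS=4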

The build will take half an hour to an hour or more depending on the capabilities of your host system and the speed of your link to the internet. It will download approximately 220 MiB of code and will consume about 3.5 GiB of disk space. When it is complete, you will find that two new directories have been created:

dl/: This contains archives of the upstream projects that Buildroot has downloaded

output/: This contains all the intermediate and final compiled resources

You will see the following in output/:

build/: Here, you will find the build directory for each component.

host/: This contains various tools required by Buildroot that run on the host, including the executables of the toolchain (in output/host/usr/bin).
images/: This is the most important of all since it contains the results of the build. Depending on what you selected when configuring, you will find a bootloader, a kernel, and one or more root filesystem images.


staging/: This is a symbolic link to the sysroot of the toolchain. The name of the link is a little confusing, because it does not point to a staging area as I defined it in Chapter 5, Building a Root Filesystem.
target/: This is the staging area for the root directory. Note that you cannot use it as a root filesystem as it stands because the file ownership and the permissions are not set correctly. Buildroot uses a device table, as described in the previous chapter, to set ownership and permissions when the filesystem image is created in the images/ directory.



Running


Some of the sample configurations have a corresponding entry in the directory board/, which contains custom configuration files and information about installing the results on the target. In the case of the system you have just built, the relevant file is board/qemu/arm-versatile/readme.txt, which tells you how to start QEMU with this target. Assuming that you have already installed qemu-system-arm as described in Chapter 1, Starting Out, you can run it using this command:

$ qemu-system-arm -M versatilepb -m 256 \
-kernel output/images/zImage \
-dtb output/images/versatile-pb.dtb \
-drive file=output/images/rootfs.ext2,if=scsi,format=raw \
-append "root=/dev/sda console=ttyAMA0,115200" \
-serial stdio -net nic,model=rtl8139 -net user


There is a script named MELP/Chapter_06/run-qemu-buildroot.sh in the book code archive, which includes that command. When QEMU boots up, you should see the kernel boot messages appear in the same terminal window where you started QEMU, followed by a login prompt:

Booting Linux on physical CPU 0x0

Linux version 4.9.6 (chris@chris-xps) (gcc version 5.4.0

(Buildroot 2017.02.1) ) #1 Tue Apr 18 10:30:03 BST 2017

CPU: ARM926EJ-S [41069265] revision 5 (ARMv5TEJ), cr=00093177

[...]

VFS: Mounted root (ext2 filesystem) readonly on device 8:0.

devtmpfs: mounted

Freeing unused kernel memory: 132K (c042f000 - c0450000)
This architecture does not have kernel memory protection.

EXT4-fs (sda): warning: mounting unchecked fs, running e2fsck is recommended

EXT4-fs (sda): re-mounted. Opts: block_validity,barrier,user_xattr,errors=remount-ro

Starting logging: OK

Initializing random number generator... done.

Starting network: 8139cp 0000:00:0c.0 eth0: link up, 100Mbps, full-duplex, lpa 0x05E1

udhcpc: started, v1.26.2

udhcpc: sending discover

udhcpc: sending select for 10.0.2.15

udhcpc: lease of 10.0.2.15 obtained, lease time 86400

deleting routers

adding dns 10.0.2.3

OK

Welcome to Buildroot

buildroot login:

Log in as root, no password.


You will see that QEMU launches a black window in addition to the one with the kernel boot messages. It is there to display the graphics frame buffer of the target. In this case, the target never writes to the framebuffer, which is why it appears black. To close QEMU, either type Ctrl-Alt-2 to get to the QEMU console and then type quit, or just close the framebuffer window.



Creating a custom BSP


Next, let's use Buildroot to create a BSP for our Nova board, using the same versions of U-Boot and Linux from earlier chapters. You can see the changes I made to Buildroot during this section in the book code archive in MELP/Chapter_06/buildroot.

The recommended places to store your changes are here:

board/<organization>/<device>: This contains any patches, binary blobs, extra build steps, and configuration files for Linux, U-Boot, and other components
configs/<device>_defconfig: This contains the default configuration for the board
package/<organization>/<package_name>: This is the place to put any additional packages for this board

Let's begin by creating a directory to store changes for the Nova board:

$ mkdir -p board/melp/nova

Next, clean the artifacts from any previous build, which you should always do when changing configurations:

$ make clean

Now, select the configuration for the BeagleBone, which we are going to use as the basis of the Nova configuration:

$ make beaglebone_defconfig



U-Boot


In Chapter 3, All About Bootloaders, we created a custom bootloader for Nova based on the 2017.01 version of U-Boot, and created a patch file for it, which you will find in MELP/Chapter_03/0001-BSP-for-Nova.patch. We can configure Buildroot to select the same version and apply our patch. Begin by copying the patch file into board/melp/nova, and then use make menuconfig to set the U-Boot version to 2017.01, the patch file to board/melp/nova/0001-BSP-for-Nova.patch, and the board name to Nova, as shown in this screenshot:
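If you prefer to see the settings as text rather than as a screenshot, the menuconfig choices above correspond to lines in the saved configuration along these lines. This is a sketch based on the Buildroot 2017.02-era option names; verify the exact symbols in your version:

BR2_TARGET_UBOOT=y
# Build a custom (non-default) U-Boot version
BR2_TARGET_UBOOT_CUSTOM_VERSION=y
BR2_TARGET_UBOOT_CUSTOM_VERSION_VALUE="2017.01"
# Apply our BSP patch before building
BR2_TARGET_UBOOT_PATCH="board/melp/nova/0001-BSP-for-Nova.patch"
# The board configuration name passed to the U-Boot build
BR2_TARGET_UBOOT_BOARDNAME="nova"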




We also need a U-Boot script to load the Nova device tree and the kernel from the SD card. We can put the file into board/melp/nova/uEnv.txt. It should contain these commands:

bootpart=0:1

bootdir=

bootargs=console=ttyO0,115200n8 root=/dev/mmcblk0p2 rw rootfstype=ext4 rootwait

uenvcmd=fatload mmc 0:1 88000000 nova.dtb;fatload mmc 0:1 82000000 zImage; bootz 82000000 - 88000000



Linux


In Chapter 4, Configuring and Building the Kernel, we based the kernel on Linux 4.9.13 and supplied a new device tree, which is in MELP/Chapter_04/nova.dts. Copy the device tree to board/melp/nova, change the Buildroot kernel configuration to select Linux version 4.9.13, and set the device tree source to board/melp/nova/nova.dts, as shown in the following screenshot:



We will also have to change the kernel series to be used for kernel headers to match the kernel being built:
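As a rough guide, the resulting kernel-related configuration lines would look something like this (again, a sketch using the 2017.02-era symbol names; check them against your Buildroot version):

BR2_LINUX_KERNEL=y
BR2_LINUX_KERNEL_CUSTOM_VERSION=y
BR2_LINUX_KERNEL_CUSTOM_VERSION_VALUE="4.9.13"
# Use our own device tree source rather than one from the kernel tree
BR2_LINUX_KERNEL_USE_CUSTOM_DTS=y
BR2_LINUX_KERNEL_CUSTOM_DTS_PATH="board/melp/nova/nova.dts"
# Kernel headers for the toolchain must match the 4.9 series
BR2_KERNEL_HEADERS_4_9=y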


Build


In the last stage of the build, Buildroot uses a tool named genimage to create an image for the SD card that we can copy directly to the card. We need a configuration file to lay out the image in the right way. We will name the file board/melp/nova/genimage.cfg and populate it as shown here:

image boot.vfat {
  vfat {
    files = {
      "MLO",
      "u-boot.img",
      "zImage",
      "uEnv.txt",
      "nova.dtb",
    }
  }
  size = 16M
}

image sdcard.img {
  hdimage {
  }
  partition u-boot {
    partition-type = 0xC
    bootable = "true"
    image = "boot.vfat"
  }
  partition rootfs {
    partition-type = 0x83
    image = "rootfs.ext4"
    size = 512M
  }
}

This will create a file named sdcard.img, which contains two partitions named u-boot and rootfs. The first contains the boot files listed in boot.vfat, and the second contains the root filesystem image named rootfs.ext4, which will be generated by Buildroot.

Finally, we need to create a post-image script that will call genimage, and so create the SD card image. We will put it in board/melp/nova/post-image.sh:

#!/bin/sh
BOARD_DIR="$(dirname $0)"

cp ${BOARD_DIR}/uEnv.txt $BINARIES_DIR/uEnv.txt

GENIMAGE_CFG="${BOARD_DIR}/genimage.cfg"
GENIMAGE_TMP="${BUILD_DIR}/genimage.tmp"

rm -rf "${GENIMAGE_TMP}"

genimage \
  --rootpath "${TARGET_DIR}" \
  --tmppath "${GENIMAGE_TMP}" \
  --inputpath "${BINARIES_DIR}" \
  --outputpath "${BINARIES_DIR}" \
  --config "${GENIMAGE_CFG}"

This copies the uEnv.txt script into the output/images directory and runs genimage with our configuration file.

Now, we can run menuconfig again and change the System configuration option, Custom scripts to run before creating filesystem images, to run our post-image.sh script, as shown in this screenshot:
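The same setting can be expressed directly in the configuration as the BR2_ROOTFS_POST_IMAGE_SCRIPT variable; for example:

BR2_ROOTFS_POST_IMAGE_SCRIPT="board/melp/nova/post-image.sh"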


Finally, you can build Linux for the Nova board just by typing make. When it has finished, you will see these files in the directory, output/images/:

boot.vfat
rootfs.ext2
sdcard.img
uEnv.txt
MLO
rootfs.ext4
u-boot.img
zImage
nova.dtb
rootfs.tar
u-boot-spl.bin


To test it, put a microSD card in the card reader, unmount any partitions that are auto mounted, and then copy sdcard.img to the root of the SD card. There is no need to format it beforehand, as we did in the previous chapter, because genimage has created the exact disk layout required. In the following example, my SD card reader is /dev/mmcblk0:

$ sudo umount /dev/mmcblk0*

$ sudo dd if=output/images/sdcard.img of=/dev/mmcblk0 bs=1M

Put the SD card into the BeagleBone Black and power on while pressing the boot button to force it to load from the SD card. You should see that it boots up with our selected versions of U-Boot, Linux, and with the Nova device tree.

Having shown that our custom configuration for the Nova board works, it would be nice to keep a copy of the configuration so that you and others can use it again, which you can do with this command:

$ make savedefconfig BR2_DEFCONFIG=configs/nova_defconfig

Now, you have a Buildroot configuration for the Nova board. Subsequently, you can retrieve this configuration by typing the following command:

$ make nova_defconfig


Adding your own code


Suppose there is a program that you have developed and you want to include it in the build. You have two options: firstly, build it separately using its own build system, and then roll the binary into the final build as an overlay; secondly, create a Buildroot package that can be selected from the menu and built like any other.


Overlays


An overlay is simply a directory structure that is copied over the top of the Buildroot root filesystem at a late stage in the build process. It can contain executables, libraries, and anything else you may want to include. Note that any compiled code must be compatible with the libraries deployed at runtime, which, in turn, means that it must be compiled with the same toolchain that Buildroot uses. Using the Buildroot toolchain is quite easy. Just add it to PATH:

$ PATH=<path_to_buildroot>/output/host/usr/bin:$PATH

The prefix for the toolchain is <ARCH>-linux-. So, to compile a simple program, you would do something like this:

$   PATH=/home/chris/buildroot/output/host/usr/bin:$PATH

$   arm-linux-gcc helloworld.c -o helloworld
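The contents of helloworld.c are not shown in this chapter. A minimal sketch, consistent with the description used for the Buildroot package later on (it prints Hello World! every 10 seconds), might look like this:

/* helloworld.c: an illustrative sketch, not the book's exact source */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        printf("Hello World!\n");
        sleep(10);    /* repeat every 10 seconds */
    }
    return 0;
}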

Once you have compiled your program with the correct toolchain, you just need to install the executables and other supporting files into a staging area, and mark it as an overlay for Buildroot. For the helloworld example, you might put it in the board/melp/nova directory:

$ mkdir -p board/melp/nova/overlay/usr/bin

$ cp helloworld board/melp/nova/overlay/usr/bin

Finally, you set BR2_ROOTFS_OVERLAY to the path to the overlay. It can be configured in menuconfig with the option, System configuration | Root filesystem overlay directories.
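For example, with the staging area created above, the line in your Buildroot configuration would read:

BR2_ROOTFS_OVERLAY="board/melp/nova/overlay"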


Adding a package


Buildroot packages are stored in the package directory, over 2,000 of them, each in its own subdirectory. A package consists of at least two files: Config.in, containing the snippet of Kconfig code required to make the package visible in the configuration menu, and a makefile named <package_name>.mk. Note that the package does not contain the code, just the instructions to get the code by downloading a tarball, doing a git pull, or whatever is necessary to obtain the upstream source.

The makefile is written in a format expected by Buildroot and contains directives that allow Buildroot to download, configure, compile, and install the program. Writing a new package makefile is a complex operation, which is covered in detail in the Buildroot user manual. Here is an example which shows you how to create a package for a simple program stored locally, such as our helloworld program.

Begin by creating the package/helloworld/ subdirectory with a configuration file, Config.in, which looks like this:

config BR2_PACKAGE_HELLOWORLD
    bool "helloworld"
    help
      A friendly program that prints Hello World! every 10s

The first line must be of the format BR2_PACKAGE_<uppercase package name>. This is followed by a bool and the package name, as it will appear in the configuration menu, which will allow a user to select this package. The help section is optional (but hopefully useful).

Next, link the new package into the Target Packages menu by editing package/Config.in and sourcing the configuration file as mentioned in the preceding section. You could append this to an existing submenu but, in this case, it seems neater to create a new submenu, which only contains our package:

menu "My programs"

source "package/helloworld/Config.in"

endmenu

Then, create a makefile, package/helloworld/helloworld.mk, to supply the data needed by Buildroot:

HELLOWORLD_VERSION = 1.0.0
HELLOWORLD_SITE = /home/chris/MELP/helloworld
HELLOWORLD_SITE_METHOD = local

define HELLOWORLD_BUILD_CMDS
    $(MAKE) CC="$(TARGET_CC)" LD="$(TARGET_LD)" -C $(@D) all
endef

define HELLOWORLD_INSTALL_TARGET_CMDS
    $(INSTALL) -D -m 0755 $(@D)/helloworld $(TARGET_DIR)/usr/bin/helloworld
endef

$(eval $(generic-package))

You can find my helloworld package in the book code archive in MELP/Chapter_06/buildroot/package/helloworld, and the source code for the program in MELP/Chapter_06/helloworld. The location of the code is hard-coded to a local pathname. In a more realistic case, you would get the code from a source code system or from a central server of some kind: there are details of how to do this in the Buildroot user manual, and plenty of examples in other packages.
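Note that creating the package files does not, by itself, cause the package to be built: you still have to enable it in the configuration, either through make menuconfig (Target packages | My programs | helloworld) or by adding the symbol to your stored configuration, and then run make again. A sketch of the configuration line:

# Enable the helloworld package defined in package/helloworld/
BR2_PACKAGE_HELLOWORLD=y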


License compliance


Buildroot is open source software, as are the packages it compiles. At some point during the project, you should check the licenses, which you can do by running:

$ make legal-info

The information is gathered into output/legal-info/. There are summaries of the licenses used to compile the host tools in host-manifest.csv and, on the target, in manifest.csv. There is more information in the README file and in the Buildroot user manual.


The Yocto Project


The Yocto Project is a more complex beast than Buildroot. Not only can it build toolchains, bootloaders, kernels, and root filesystems, as Buildroot can, but it can generate an entire Linux distribution for you, with binary packages that can be installed at runtime. The Yocto Project is primarily a group of recipes, similar to Buildroot packages but written using a combination of Python and shell script, together with a task scheduler called BitBake that produces whatever you have configured from the recipes.

There is plenty of online documentation at https://www.yoctoproject.org/.


Background


The structure of the Yocto Project makes more sense if you look at the background first. Its roots are in OpenEmbedded, http://openembedded.org/, which, in turn, grew out of a number of projects to port Linux to various hand-held computers, including the Sharp Zaurus and the Compaq iPaq. OpenEmbedded came to life in 2003 as the build system for those hand-held computers, but soon after, other developers began to use it as a general build system for devices running embedded Linux. It was developed, and continues to be developed, by an enthusiastic community of programmers.

The OpenEmbedded project set out to create a set of binary packages using the compact IPK format, which could then be combined in various ways to create a target system and be installed on the target at runtime. It did this by creating recipes for each package and using BitBake as the task scheduler. It was, and is, very flexible. By supplying the right metadata, you can create an entire Linux distribution to your own specification. One that is fairly well-known is the Ångström Distribution, http://www.angstrom-distribution.org, but there are many others as well.

At some time in 2005, Richard Purdie, then a developer at OpenedHand, created a fork of OpenEmbedded which had a more conservative choice of packages and created releases that were stable over a period of time. He named it Poky after the Japanese snack (if you are worried about these things, Poky is pronounced to rhyme with hockey). Although Poky was a fork, OpenEmbedded and Poky continued to run alongside each other, sharing updates and keeping the architectures more or less in step. Intel bought OpenedHand in 2008, and they transferred Poky Linux to the Linux Foundation in 2010 when they formed the Yocto Project.

Since 2010, the common components of OpenEmbedded and Poky have been combined into a separate project known as OpenEmbedded Core or just OE-Core.

Therefore, the Yocto Project collects together several components, the most important of which are the following:

OE-Core: This is the core metadata, which is shared with OpenEmbedded
BitBake: This is the task scheduler, which is shared with OpenEmbedded and other projects
Poky: This is the reference distribution
Documentation: This is the user's manuals and developer's guides for each component
Toaster: This is a web-based interface to BitBake and its metadata
ADT Eclipse: This is a plugin for Eclipse

The Yocto Project provides a stable base, which can be used as it is or can be extended using meta layers, which I will discuss later in this chapter. Many SoC vendors provide BSPs for their devices in this way. Meta layers can also be used to create extended or just different build systems. Some are open source, such as the Ångström Distribution, and others are commercial, such as MontaVista Carrier Grade Edition, Mentor Embedded Linux, and Wind River Linux. The Yocto Project has a branding and compatibility testing scheme to ensure that there is interoperability between components. You will see statements like Yocto Project compatible on various web pages.

Consequently, you should think of the Yocto Project as the foundation of a whole sector of embedded Linux, as well as being a complete build system in its own right.

You may be wondering about the name, Yocto. yocto is the SI prefix for 10^-24, in the same way that micro is 10^-6. Why name the project Yocto? It was partly to indicate that it could build very small Linux systems (although, to be fair, so can other build systems), but also to steal a march on the Ångström Distribution, which is based on OpenEmbedded. An Ångström is 10^-10. That's huge, compared to a yocto!


Stable releases and support


Usually, there is a release of the Yocto Project every six months: in April and October. They are principally known by the code name, but it is useful to know the version numbers of the Yocto Project and Poky as well. Here is a table of the six most recent releases at the time of writing:

Code name    Release date    Yocto version    Poky version
Morty        October 2016    2.2              16
Krogoth      April 2016      2.1              15
Jethro       October 2015    2.0              14
Fido         April 2015      1.8              13
Dizzy        October 2014    1.7              12
Daisy        April 2014      1.6              11

The stable releases are supported with security and critical bug fixes for the current release cycle and the next cycle. In other words, each version is supported for approximately 12 months after the release. As with Buildroot, if you want continued support, you can update to the next stable release, or you can backport changes to your version. You also have the option of commercial support for periods of several years with the Yocto Project from operating system vendors, such as Mentor Graphics, Wind River, and many others.


Installing the Yocto Project


To get a copy of the Yocto Project, you can either clone the repository, choosing the code name as the branch, which is morty in this case:

$ git clone -b morty git://git.yoctoproject.org/poky.git

or you can download an archive of the release, poky-morty-16.0.0.tar.bz2, from the Yocto Project downloads site. In the first case, you will find everything in the directory poky/; in the second case, in poky-morty-16.0.0/.

In addition, you should read the section titled System Requirements from the Yocto Project Reference Manual (in particular, the list of supported distributions in the section detailed-supported-distros), and make sure that the packages listed there are installed on your host computer.


Configuring


As with Buildroot, let's begin with a build for the QEMU ARM emulator. Begin by sourcing a script to set up the environment:

$ cd poky

$ source oe-init-build-env

This creates a working directory for you named build/ and makes it the current directory. All of the configuration, intermediate, and target image files will be put in this directory. You must source this script each time you want to work on this project.

You can choose a different working directory by adding it as a parameter to oe-init-build-env, for example:

$ source oe-init-build-env build-qemuarm

This will put you into the directory build-qemuarm/. This way, you can have several build directories, each for a different project: you choose which one you want to work with through the parameter to oe-init-build-env.

Initially, the build directory contains only one subdirectory named conf/, which contains the configuration files for this project:

local.conf: This contains a specification of the device you are going to build and the build environment.
bblayers.conf: This contains paths of the meta layers you are going to use. I will describe layers later on.
templateconf.cfg: This contains the name of a directory, which contains various conf files. By default, it points to meta-poky/conf/.

For now, we just need to set the MACHINE variable in local.conf to qemuarm by removing the comment character (#) at the start of this line:


MACHINE ?= "qemuarm"


Building


To actually perform the build, you need to run BitBake, telling it which root filesystem image you want to create. Some common images are as follows:

core-image-minimal: This is a small console-based system which is useful for tests and as the basis for custom images.
core-image-minimal-initramfs: This is similar to core-image-minimal, but built as a ramdisk.
core-image-x11: This is a basic image with support for graphics through an X11 server and the xterminal terminal app.
core-image-sato: This is a full graphical system based on Sato, which is a mobile graphical environment built on X11 and GNOME. The image includes several apps including a terminal, an editor, and a file manager.

By giving BitBake the final target, it will work backwards and build all the dependencies first, beginning with the toolchain. For now, we just want to create a minimal image to see how it works:

$ bitbake core-image-minimal

The build is likely to take some time, probably more than an hour. It will download about 4 GiB of source code, and it will consume about 24 GiB of disk space. When it is complete, you will find several new directories in the build directory, including downloads/, which contains all the source downloaded for the build, and tmp/, which contains most of the build artifacts. You should see the following in tmp/:

work/: This contains the build directory and the staging area for the root filesystem.
deploy/: This contains the final binaries to be deployed on the target:
    deploy/images/[machine name]/: This contains the bootloader, the kernel, and the root filesystem images ready to be run on the target
    deploy/rpm/: This contains the RPM packages that went to make up the images
    deploy/licenses/: This contains the license files extracted from each package


Running the QEMU target


When you build a QEMU target, an internal version of QEMU is generated, which removes the need to install the QEMU package for your distribution, and thus avoids version dependencies. There is a wrapper script named runqemu to run this version of QEMU.

To run the QEMU emulation, make sure that you have sourced oe-init-build-env, and then just type this:

$ runqemu qemuarm

In this case, QEMU has been configured with a graphic console so that the boot messages and login prompt appear in the black framebuffer, as shown in the following screenshot:


You can log in as root, without a password. You can close down QEMU by closing the framebuffer window.

You can launch QEMU without the graphic window by adding nographic to the command line:

$ runqemu qemuarm nographic

In this case, you close QEMU using the key sequence Ctrl + A and then x.

The runqemu script has many other options. Type runqemu help for more information.


Layers


The metadata for the Yocto Project is structured into layers. By convention, each layer has a name beginning with meta. The core layers of the Yocto Project are as follows:

meta: This is the OpenEmbedded core with some changes for Poky
meta-poky: This is the metadata specific to the Poky distribution
meta-yocto-bsp: This contains the board support packages for the machines that the Yocto Project supports

The list of layers in which BitBake searches for recipes is stored in <your build directory>/conf/bblayers.conf and, by default, includes all three layers mentioned in the preceding list.
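For reference, a bblayers.conf from a Morty build directory typically looks something like this; the paths will, of course, differ on your system:

POKY_BBLAYERS_CONF_VERSION = "2"

BBPATH = "${TOPDIR}"
BBFILES ?= ""

BBLAYERS ?= " \
  /home/chris/poky/meta \
  /home/chris/poky/meta-poky \
  /home/chris/poky/meta-yocto-bsp \
  "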

By structuring the recipes and other configuration data in this way, it is very easy to extend the Yocto Project by adding new layers. Additional layers are available from SoC manufacturers, the Yocto Project itself, and a wide range of people wishing to add value to the Yocto Project and OpenEmbedded. There is a useful list of layers at http://layers.openembedded.org/layerindex/branch/master/layers/. Here are some examples:

meta-angstrom: The Ångström distribution

meta-qt5: Qt 5 libraries and utilities

meta-intel: BSPs for Intel CPUs and SoCs

meta-ti: BSPs for TI ARM-based SoCs

Adding a layer is as simple as copying the meta directory into a suitable location, usually alongside the default meta layers, and adding it to bblayers.conf. Make sure that you read the README file that should accompany each layer to see what dependencies it has on other layers and which versions of the Yocto Project it is compatible with.

To illustrate the way that layers work, let's create a layer for our Nova board, which we can use for the remainder of the chapter as we add features. You can see the complete implementation of the layer in the book code archive in MELP/Chapter_06/poky/meta-nova.

Each meta layer has to have at least one configuration file, named conf/layer.conf, and it should also have the README file and a license. There is a handy helper script that does the basics for us:

$ cd poky

$ scripts/yocto-layer create nova

The script asks for a priority, and whether you want to create sample recipes. In the example here, I just accepted the defaults:

Please enter the layer priority you'd like to use for the layer:

[default: 6]

Would you like to have an example recipe created? (y/n) [default: n]
Would you like to have an example bbappend file created? (y/n) [default: n]

New layer created in meta-nova.

Don't forget to add it to your BBLAYERS (for details see meta-nova/README).

This will create a layer named meta-nova with a conf/layer.conf, an outline README, and an MIT license in COPYING.MIT. The layer.conf file looks like this:

# We have a conf and classes directory, add to BBPATH
BBPATH .= ":${LAYERDIR}"

# We have recipes-* directories, add to BBFILES
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"

BBFILE_COLLECTIONS += "nova"
BBFILE_PATTERN_nova = "^${LAYERDIR}/"
BBFILE_PRIORITY_nova = "6"

It adds itself to BBPATH and the recipes it contains to BBFILES. From looking at the code, you can see that the recipes are found in the directories with names beginning recipes- and have filenames ending in .bb (for normal BitBake recipes) or .bbappend (for recipes that extend existing recipes by overriding or adding to the instructions). This layer has the name nova added to the list of layers in BBFILE_COLLECTIONS and has a priority of 6. The layer priority is used if the same recipe appears in several layers: the one in the layer with the highest priority wins.

Since you are about to build a new configuration, it is best to begin by creating a new build directory named build-nova:

$ cd ~/poky

$ source oe-init-build-env build-nova

Now, you need to add this layer to your build configuration using the command:

$ bitbake-layers add-layer ../meta-nova

You can confirm that it is set up correctly like this:

$ bitbake-layers show-layers

layer            path                              priority
==========================================================
meta             /home/chris/poky/meta             5
meta-poky        /home/chris/poky/meta-poky        5
meta-yocto-bsp   /home/chris/poky/meta-yocto-bsp   5
meta-nova        /home/chris/poky/meta-nova        6

There, you can see the new layer. It has a priority of 6, which means that we could override recipes in the other layers, which all have lower priorities.

At this point, it would be a good idea to run a build, using this empty layer. The final target will be the Nova board but, for now, build for a BeagleBone Black by removing the comment before MACHINE ?= "beaglebone" in conf/local.conf. Then, build a small image using bitbake core-image-minimal, as before.

As well as recipes, layers may contain BitBake classes, configuration files for machines, distributions, and more. I will look at recipes next and show you how to create a customized image and how to create a package.


BitBake and recipes


BitBake processes metadata of several different types, which include the following:

Recipes: Files ending in .bb. These contain information about building a unit of software, including how to get a copy of the source code, the dependencies on other components, and how to build and install it.
Append: Files ending in .bbappend. These allow some details of a recipe to be overridden or extended. A bbappend file simply appends its instructions to the end of a recipe (.bb) file of the same root name.
Include: Files ending in .inc. These contain information that is common to several recipes, allowing information to be shared among them. The files may be included using the include or require keywords. The difference is that require produces an error if the file does not exist, whereas include does not.
Classes: Files ending in .bbclass. These contain common build information, for example, how to build a kernel or how to build an autotools project. The classes are inherited and extended in recipes and other classes using the inherit keyword. The class classes/base.bbclass is implicitly inherited in every recipe.
Configuration: Files ending in .conf. These define various configuration variables that govern the project's build process.

A recipe is a collection of tasks written in a combination of Python and shell script. The tasks have names such as do_fetch, do_unpack, do_patch, do_configure, do_compile, and do_install. You use BitBake to execute these tasks. The default task is do_build, which performs all the subtasks required to build the recipe. You can list the tasks available in a recipe using bitbake -c listtasks [recipe]. For example, you can list the tasks in core-image-minimal like this:

$ bitbake -c listtasks core-image-minimal
[...]
core-image-minimal-1.0-r0 do_listtasks: do_build
core-image-minimal-1.0-r0 do_listtasks: do_bundle_initramfs
core-image-minimal-1.0-r0 do_listtasks: do_checkuri
core-image-minimal-1.0-r0 do_listtasks: do_checkuriall
core-image-minimal-1.0-r0 do_listtasks: do_clean
[...]

In fact, -c is the option that tells BitBake to run a specific task in a recipe with the task being named with the do_ part stripped off. The task do_listtasks is simply a special task that lists all the tasks defined within a recipe. Another example is the fetch task, which downloads the source code for a recipe:

$ bitbake -c fetch busybox

You can also use the fetchall task to get the code for the target and all the dependencies, which is useful if you want to make sure you have downloaded all the code for the image you are about to build:

$ bitbake -c fetchall core-image-minimal

The recipe files are usually named <package-name>_<version>.bb. They may have dependencies on other recipes, which allows BitBake to work out all the subtasks that need to be executed to complete the top-level job.

As an example, to create a recipe for our helloworld program in meta-nova, you would create a directory structure like this:

meta-nova/recipes-local/helloworld
├── files
│   └── helloworld.c
└── helloworld_1.0.bb

The recipe is helloworld_1.0.bb and the source is local to the recipe directory in the subdirectory files/. The recipe contains these instructions:

DESCRIPTION = "A friendly program that prints Hello World!"

PRIORITY = "optional"

SECTION = "examples"

LICENSE = "GPLv2"

LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/GPL-2.0; md5=801f80980d171dd6425610833a22dbe6"

SRC_URI = "file://helloworld.c"

S = "${WORKDIR}"

do_compile() {

${CC} ${CFLAGS} ${LDFLAGS} helloworld.c -o helloworld

}

do_install() {

install -d ${D}${bindir}

install -m 0755 helloworld ${D}${bindir}

}

The location of the source code is set by SRC_URI. In this case, the file:// URI means that the code is local to the recipe directory. BitBake will search the directories files/, helloworld/, and helloworld-1.0/, relative to the directory that contains the recipe. The tasks that need to be defined are do_compile and do_install, which compile the one source file and install it into the target root filesystem: ${D} expands to the staging area of the recipe and ${bindir} to the default binary directory, /usr/bin.

Every recipe has a license, defined by LICENSE, which is set to GPLv2 here. The file containing the text of the license and a checksum is defined by LIC_FILES_CHKSUM. BitBake will terminate the build if the checksum does not match, indicating that the license has changed in some way. The license file may be part of the package, or it may point to one of the standard license texts in meta/files/common-licenses/, as is the case here.

By default, commercial licenses are disallowed, but it is easy to enable them.

You need to specify the license in the recipe, as shown here:

LICENSE_FLAGS = "commercial"

Then, in your conf/local.conf, you would explicitly allow this license, like so:

LICENSE_FLAGS_WHITELIST = "commercial"

Now, to make sure that our helloworld recipe compiles correctly, you can ask BitBake to build it, like so:

$ bitbake helloworld

If all goes well, you should see that it has created a working directory for it in tmp/work/cortexa8hf-vfp-neon-poky-linux-gnueabi/helloworld/. You should also see that there is an RPM package for it in tmp/deploy/rpm/cortexa8hf_vfp_neon/helloworld-1.0-r0.cortexa8hf_vfp_neon.rpm.

It is not part of the target image yet, though. The list of packages to be installed is held in a variable named IMAGE_INSTALL. You can append to the end of that list by adding this line to conf/local.conf:

IMAGE_INSTALL_append = " helloworld"


Note that there has to be a space between the opening double quote and the first package name. Now, the package will be added to any image that you bitbake:

$ bitbake core-image-minimal

If you look in tmp/deploy/images/beaglebone/core-image-minimal-beaglebone.tar.bz2, you will see that /usr/bin/helloworld has indeed been installed.


Customizing images via local.conf


You may often want to add a package to an image during development or tweak it in other ways. As shown previously, you can simply append to the list of packages to be installed by adding a statement like this:

IMAGE_INSTALL_append = " strace helloworld"

You can make more sweeping changes via EXTRA_IMAGE_FEATURES. Here is a short list which should give you an idea of the features you can enable:

dbg-pkgs: This installs debug symbol packages for all the packages installed in the image.
debug-tweaks: This allows root logins without passwords and other changes that make development easier.
package-management: This installs package management tools and preserves the package manager database.
read-only-rootfs: This makes the root filesystem read-only. We will cover this in more detail in  7, Creating a Storage Strategy.
x11: This installs the X server.

x11-base: This installs the X server with a minimal environment.

x11-sato: This installs the OpenedHand Sato environment.

There are many more features that you can add in this way. I recommend you look at the Image Features section of the Yocto Project Reference Manual and also read through the code in meta/classes/core-image.bbclass.
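As with packages, the features are enabled with a single line in conf/local.conf; for example, to allow password-less root logins during development and include a runtime package manager, you might use:

EXTRA_IMAGE_FEATURES = "debug-tweaks package-management"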


Writing an image recipe


The problem with making changes to local.conf is that they are, well, local. If you want to create an image that is to be shared with other developers or to be loaded onto a production system, then you should put the changes into an image recipe.

An image recipe contains instructions about how to create the image files for a target, including the bootloader, the kernel, and the root filesystem images. By convention, image recipes are put into a directory named images, so you can get a list of all the images that are available by using this command:

$ ls meta*/recipes*/images/*.bb

You will find that the recipe for core-image-minimal is in meta/recipes-core/images/core-image-minimal.bb.

A simple approach is to take an existing image recipe and modify it using statements similar to those you used in local.conf.

For example, imagine that you want an image that is the same as core-image-minimal but includes your helloworld program and the strace utility. You can do that with a two-line recipe file, which includes (using the require keyword) the base image and adds the packages you want. It is conventional to put the image in a directory named images, so add the recipe nova-image.bb with this content in meta-nova/recipes-local/images:

require recipes-core/images/core-image-minimal.bb

IMAGE_INSTALL += "helloworld strace"

Now, you can remove the IMAGE_INSTALL_append line from your local.conf and build it using this:

$ bitbake nova-image


Creating an SDK


It is very useful to be able to create a standalone toolchain that other developers can install, avoiding the need for everyone in the team to have a full installation of the Yocto Project. Ideally, you want the toolchain to include development libraries and header files for all the libraries installed on the target. You can do that for any image using the populate_sdk task, as shown here:

$ bitbake -c populate_sdk nova-image

The result is a self-installing shell script in tmp/deploy/sdk:

poky-<c_library>-<host_machine>-<target_image>-<target_machine>-toolchain-<version>.sh

For the SDK built with the nova-image recipe, it is this:

poky-glibc-x86_64-nova-image-cortexa8hf-neon-toolchain-2.2.1.sh

If you only want a basic toolchain with just C and C++ cross compilers, the C-library and header files, you can instead run this:

$ bitbake meta-toolchain

To install the SDK, just run the shell script. The default install directory is /opt/poky, but the install script allows you to change this:

$ tmp/deploy/sdk/poky-glibc-x86_64-nova-image-cortexa8hf-neon-toolchain-2.2.1.sh
Poky (Yocto Project Reference Distro) SDK installer version 2.2.1
=================================================================
Enter target directory for SDK (default: /opt/poky/2.2.1):
You are about to install the SDK to "/opt/poky/2.2.1". Proceed[Y/n]?
[sudo] password for chris:
Extracting SDK...........................done
Setting it up...done


To make use of the toolchain, first source the environment setup script:

$ source /opt/poky/2.2.1/environment-setup-cortexa8hf-neon-poky-linux-gnueabi


The environment-setup-* script that sets things up for the SDK is not compatible with the oe-init-build-env script that you source when working in the Yocto Project build directory. It is a good rule to always start a new terminal session before you source either script.

The toolchain generated by the Yocto Project does not have a valid sysroot directory:

$ arm-poky-linux-gnueabi-gcc -print-sysroot
/not/exist


Consequently, if you try to cross compile, as I have shown in previous chapters, it will fail like this:

$ arm-poky-linux-gnueabi-gcc helloworld.c -o helloworld
helloworld.c:1:19: fatal error: stdio.h: No such file or directory
 #include <stdio.h>
                   ^
compilation terminated.

This is because the compiler has been configured to work for a wide range of ARM processors, and the fine tuning is done when you launch it using the right set of flags. Instead, you should use the shell variables that are created when you source the environment-setup script for cross compiling. They include these:

CC: The C compiler

CXX: The C++ compiler

CPP: The C preprocessor

AS: The assembler

LD: The linker

As an example, this is what CC has been set to:

$ echo $CC

arm-poky-linux-gnueabi-gcc -march=armv7-a -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a8 --sysroot=/opt/poky/2.2.1/sysroots/cortexa8hf-neon-poky-linux-gnueabi

So long as you use $CC to compile, everything should work fine:

$ $CC helloworld.c -o helloworld


TLDR;

Using a build system takes the hard work out of creating an embedded Linux system, and it is almost always better than hand-crafting a roll-your-own system. There is a range of open source build systems available these days: Buildroot and the Yocto Project represent two different approaches. Buildroot is simple and quick, making it a good choice for fairly simple single-purpose devices: traditional embedded Linux, as I like to think of them. The Yocto Project is more complex and flexible. It is package based, meaning that you have the option to install a package manager and perform updates of individual packages in the field. The meta layer structure makes it easy to extend the metadata, and indeed there is good support throughout the community and industry for the Yocto Project. The downside is that there is a very steep learning curve: you should expect it to take several months to become proficient with it, and even then it will sometimes do things that you don't expect, or at least that is my experience.

Don't forget that any devices you create using these tools will need to be maintained in the field for a period of time, often many years. Both the Yocto Project and Buildroot provide point releases for about one year after the initial release. In either case, you will find yourself having to maintain your release yourself or else paying for commercial support. The third possibility, ignoring the problem, should not be considered an option!

In the next chapter, I will look at file storage and filesystems, and at the way that the choices you make there will affect the stability and maintainability of your embedded Linux system.


