The four elements of embedded Linux
Every project begins by obtaining, customizing, and deploying these four elements: the toolchain, the bootloader, the kernel, and the root filesystem. This is the topic of the first section of this book.
Toolchain: The compiler and other tools needed to create code for your target device. Everything else depends on the toolchain.
Bootloader: The program that initializes the board and loads the Linux kernel.
Kernel: The heart of the system, managing system resources and interfacing with hardware.
Root filesystem: Contains the libraries and programs that are run once the kernel has completed its initialization.
Of course, there is also a
fifth element, not mentioned here. That is the collection of programs specific
to your embedded application which make the device do whatever it is supposed
to do, be it weigh groceries, display movies, control a robot, or fly a drone.
Typically, you will be
offered some or all of these elements as a package when you buy your SoC or
board. But, for the reasons mentioned in the preceding paragraph, they may not
be the best choices for you. I will give you the background to make the right
selections in the first six chapters, and I will introduce you to two tools that
automate the whole process for you: Buildroot and the Yocto Project.
Hardware for embedded Linux
If you are designing or
selecting hardware for an embedded Linux project, what do you look out for?
Firstly, a CPU architecture
that is supported by the kernel—unless you plan to add a new architecture yourself,
of course! Looking at the source code for Linux 4.9, there are 31
architectures, each represented by a sub-directory in the arch/ directory. They
are all 32- or 64-bit architectures, most with a memory management unit (MMU), but some without. The ones most
often found in embedded devices are
ARM, MIPS, PowerPC, and x86, each in 32- and 64-bit variants, and all of which
have memory management units.
Most of this book is written with this class of
processor in mind. There is another group that doesn't have an MMU that runs a
subset of Linux known as microcontroller
Linux or uClinux. These
processor architectures include ARC, Blackfin,
MicroBlaze, and Nios. I will mention uClinux from time to time but I will not
go into detail because it is a rather specialized topic.
Secondly, you will need a
reasonable amount of RAM. 16 MiB is a good minimum, although it is quite
possible to run Linux using half that. It is even possible to run Linux with 4
MiB if you are prepared to go to the trouble of optimizing every part of the
system. It may even be possible to get lower, but there comes a point at which
it is no longer Linux.
Thirdly, there is
non-volatile storage, usually flash memory. 8 MiB is enough for a simple device
such as a webcam or a simple router. As with RAM, you can create a workable
Linux system with less storage if you really want to, but the lower you go, the
harder it becomes. Linux has extensive support for flash storage devices,
including raw NOR and NAND flash chips, and managed flash in the form of SD
cards, eMMC chips, USB flash memory, and so on.
Fourthly, a debug port is
very useful, most commonly an RS-232 serial port. It does not have to be fitted
on production boards, but makes board bring-up, debugging, and development much
easier.
A few years ago, boards would have been fitted with a Joint Test Action Group
(JTAG) interface for this
purpose, but modern SoCs have the ability to load boot
code directly from removable media, especially SD and micro SD cards, or
serial
interfaces such as RS-232 or USB.
In addition to these basics, there are
interfaces to the specific bits of hardware your device needs to get its job
done. Mainline Linux comes with open source drivers for many thousands of
different devices, and there are drivers (of variable quality) from the SoC
manufacturer and from the OEMs of third-party chips that may be included in the
design, but remember my comments on the commitment and ability of some
manufacturers. As a developer of embedded devices, you will find that you spend
quite a lot of time evaluating and adapting third-party code, if you have it,
or liaising with the manufacturer if you don't. Finally, you will have to write
the device support for interfaces that are unique to the device, or find
someone to do it for you.
The BeagleBone Black
The BeagleBone and the
later BeagleBone Black are open hardware designs for a small, credit card sized
development board produced by CircuitCo LLC. The main repository of information
is at https://beagleboard.org/. The main points of the specifications are:
TI AM335x 1 GHz ARM® Cortex-A8 Sitara SoC
512 MiB DDR3 RAM
2 or 4 GiB 8-bit eMMC on-board flash storage
Serial port for debug and development
MicroSD connector, which can be used as the boot device
Mini USB OTG client/host port that can also be used to power the board
Full size USB 2.0 host port
10/100 Ethernet port
HDMI for video and audio output
In addition, there are two
46-pin expansion headers for which there are a great variety of daughter
boards, known as capes, which allow
you to adapt the board to do many different things. However, you do not need to
fit any capes in the examples in this book.
In addition to the board itself, you will need:
A mini USB to full-size USB
cable (supplied with the board) to provide power, unless you have the last item
on this list.
An RS-232 cable
that can interface with the 6-pin 3.3V TTL level signals provided by the board.
The Beagleboard website has links to compatible cables.
A microSD card and a means
of writing to it from your development PC or laptop, which will be needed to
load software onto the board.
An Ethernet cable, as some of the examples require network connectivity.
Optional, but recommended: a 5V power supply capable of delivering 1 A or more.
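The list mentions a means of writing the microSD card from your development PC. A hedged sketch of one way to do that from Linux follows; the image name and the /dev/sdX device are placeholders, so identify your card device carefully before piping the output into sh.

```shell
# Hedged sketch: write an image to a microSD card. "image.img" and /dev/sdX are
# placeholders; double-check the device name before actually running dd.
write_image() {
    img="$1"; dev="$2"
    # Refuse to touch a device that is currently mounted (rough safety check).
    if grep -q "^$dev " /proc/mounts 2>/dev/null; then
        echo "refusing: $dev is mounted" >&2
        return 1
    fi
    echo "sudo dd if=$img of=$dev bs=1M status=progress"
}
# To actually write the card:
# write_image image.img /dev/sdX | sh
```

The function only prints the dd command, which lets you review it before running anything destructive.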
QEMU
QEMU is a machine emulator.
It comes in a number of different flavors, each of which can emulate a
processor architecture and a number of boards built using that architecture.
For example, we have the following:
qemu-system-arm: ARM
qemu-system-mips: MIPS
qemu-system-ppc: PowerPC
qemu-system-x86: x86 and x86_64
For each architecture, QEMU emulates a range of hardware, which you can see by using the option -machine help. Each machine emulates most of the hardware that would normally be found on that board. There are options to link hardware to local resources, such as using a local file for the emulated disk drive. Here is a concrete example:
$ qemu-system-arm -machine vexpress-a9 -m 256M -drive file=rootfs.ext4,sd \
-kernel zImage -dtb vexpress-v2p-ca9.dtb \
-append "console=ttyAMA0,115200 root=/dev/mmcblk0" \
-serial stdio -net nic,model=lan9118 -net tap,ifname=tap0
The options used in the preceding command line are:
-machine vexpress-a9: Creates an emulation of an ARM Versatile Express development board with a Cortex-A9 processor
-m 256M: Populates it with 256 MiB of RAM
-drive file=rootfs.ext4,sd: Connects the SD interface to the local file rootfs.ext4 (which contains a filesystem image)
-kernel zImage: Loads the Linux kernel from the local file named zImage
-dtb vexpress-v2p-ca9.dtb: Loads the device tree from the local file vexpress-v2p-
ca9.dtb
-append "...": Supplies this string as the kernel
command-line
-serial stdio: Connects the serial port
to the terminal that launched QEMU, usually so that you can log on to the emulated
machine via the serial console
-net nic,model=lan9118: Creates a network interface
To configure the host side of the network, you need the tunctl command from the User Mode Linux (UML) project; on Debian and Ubuntu, the package is named uml-utilities:
$ sudo tunctl -u $(whoami) -t tap0
This creates a network
interface named tap0 which is connected to the
network controller in the emulated QEMU machine. You configure tap0 in exactly the same way as
any other interface.
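Configuring tap0 "in exactly the same way as any other interface" usually means assigning it an address with iproute2. A minimal sketch follows; the address 192.168.99.1/24 is an assumption, so pick one to suit the network inside the emulated machine.

```shell
# Minimal sketch: bring up tap0 with iproute2. The address is an assumed
# example; the function only prints the commands so you can review them first.
configure_tap() {
    addr="$1"
    echo "ip addr add $addr dev tap0"
    echo "ip link set tap0 up"
}
# Apply with root privileges:
# configure_tap 192.168.99.1/24 | sudo sh
```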
All of these options are described in detail in the chapters that follow. I will be using Versatile
Express for most of my examples, but it should be easy to use a different
machine or architecture.
Learning About Toolchains
The toolchain is the first
element of embedded Linux and the starting point of your project. You will use
it to compile all the code that will run on your device. The choices you make
at this early stage will have a profound impact on the final outcome. Your
toolchain should be capable of making effective use of your hardware by using
the optimum instruction set for your processor. It should support the languages
that you require, and have a solid implementation of the Portable Operating System Interface (POSIX) and other system interfaces. Not only that, but it should be updated when security flaws are
discovered or bugs are found. Finally, it should be constant throughout the
project. In other words, once you have chosen your toolchain, it is important
to stick with it. Changing compilers and development libraries in an
inconsistent way during a project will lead to subtle bugs.
Obtaining a toolchain can be as simple as downloading and installing a TAR file, or it can be as complex as building the whole thing from source code. In this chapter, I take the latter approach, with the help of a tool called crosstool-NG, so that I can show you the details of creating a toolchain. Later on, in Selecting a Build System, I will switch to using the toolchain generated by the build system, which is the more usual means of obtaining a toolchain.
In this chapter, we will cover the following topics:
Introducing toolchains
Finding a toolchain
Building a toolchain using the crosstool-NG tool
Anatomy of a toolchain
Linking with libraries--static and dynamic
linking
The art of cross compiling
Introducing toolchains
A toolchain is the set of
tools that compiles source code into executables that can run on your target
device, and includes a compiler, a linker, and run-time libraries. Initially
you need one to build the other three elements of an embedded Linux system: the
bootloader, the kernel, and the root filesystem. It has to be able to compile
code written in assembly, C, and C++ since these are the languages used in the
base open source packages.
Usually, toolchains for Linux are based on components from the GNU project (http://www.gnu.org), and that is
still true in the majority of cases at the time of writing. However, over the past few years, the Clang compiler and the associated Low Level Virtual Machine (LLVM) project (http://llvm.org) have progressed
to the point that it is now a viable
alternative to a GNU toolchain. One major distinction between LLVM and
GNU-based toolchains is the licensing; LLVM has a BSD license while GNU has the
GPL. There are some technical advantages to Clang as well, such as faster
compilation and better diagnostics, but GNU GCC has the advantage of
compatibility with the existing code base and support for a wide range of
architectures and operating systems. Indeed, there are still some areas where
Clang cannot replace the GNU C compiler, especially when it comes to compiling
a mainline Linux kernel. It is probable that, in the next year or so, Clang
will be able to compile all the components needed for embedded Linux and so will
become an alternative to GNU. There is a good description of how to use Clang for cross compilation at http://clang.llvm.org/docs/CrossCompilation.html. If you would like to use it as part of an embedded
Linux build system, the EmbToolkit (https://www.embtoolkit.org) fully supports both GNU and
LLVM/Clang toolchains, and various people are working on using Clang with Buildroot and the
Yocto Project. I will cover embedded build systems in Chapter 6, Selecting a Build System. Meanwhile, this chapter focuses on the GNU toolchain as it is the only complete option at this time.
A standard GNU toolchain consists of three main components:
Binutils: A set of binary utilities including the assembler and the linker. It is available at http://www.gnu.org/software/binutils/.
GNU Compiler Collection (GCC):
These are the compilers for C and other
languages which, depending on the version of GCC, include C++, Objective-C,
Objective-C++, Java, Fortran, Ada, and Go. They all use a common backend which
produces assembler code, which is fed to the GNU assembler. It is available at http://gcc.gnu.org/.
C library: A standardized application program interface (API) based on the POSIX
specification, which is the main interface to the operating system kernel for
applications. There are several C libraries to consider, as we shall see later on in this chapter.
As well as these, you will
need a copy of the Linux kernel headers, which contain definitions and constants
that are needed when accessing the kernel directly. Right now, you need them to
be able to compile the C library, but you will also need them later when
writing programs or compiling libraries that interact with particular Linux
devices, for example, to display graphics via the Linux frame buffer driver.
This is not simply a question of making a copy of the header files in the
include directory of your kernel source code. Those headers are intended for
use in the kernel only and contain definitions that will cause conflicts if
used in their raw state to compile regular Linux applications.
Instead, you will need to generate a set of sanitized kernel headers, which I have illustrated in Building a Root Filesystem.
It is not usually crucial
whether the kernel headers are generated from the exact version of Linux you
are going to be using or not. Since the kernel interfaces are always
backwards-compatible, it is only necessary that the headers are from a kernel
that is the same as, or older than, the one you are using on the target.
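The "same as, or older than" comparison works because the headers encode their version as a single number, which programs can compare directly. A rough sketch of that encoding follows; the packing scheme matches the LINUX_VERSION_CODE macro from the real kernel headers.

```shell
# Sketch: the kernel headers encode major.minor.patch as a single integer,
# (major << 16) + (minor << 8) + patch, which makes version comparisons trivial.
version_code() {
    old_ifs="$IFS"; IFS=.
    set -- $1            # split "major.minor.patch" on the dots
    IFS="$old_ifs"
    echo $(( ($1 << 16) + ($2 << 8) + $3 ))
}
version_code 4.9.0       # prints 264448
```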
Most people would consider
the GNU Debugger (GDB) to be part of the toolchain as
well, and it is usual that it is built at this point. I will talk about GDB in Debugging with GDB.
Types of toolchains
For our purposes, there are two types of toolchain:
Native: This toolchain runs on
the same type of system (sometimes the same
actual system) as the programs it generates. This is the usual case for
desktops and servers, and it is becoming popular on certain classes of embedded
devices. The Raspberry Pi running Debian for ARM, for example, has self-hosted
native compilers.
Cross: This toolchain runs on a
different type of system than the target,
allowing the development to be done on a fast desktop PC and then loaded
onto the embedded target for testing.
Almost all embedded Linux development is done
using a cross development toolchain, partly because most embedded devices are
not well suited to program development since they lack computing power, memory,
and storage, but also because it keeps the host and target environments
separate. The latter point is especially important when the host and the target
are using the same architecture, x86_64, for example. In this case, it is
tempting to compile natively on the host and simply copy the binaries to the
target.
This works up to a point,
but it is likely that the host distribution will receive updates more often
than the target, or that different engineers building code for the target will
have slightly different versions of the host development libraries. Over time,
the development and target systems will diverge and you will violate the
principle that the toolchain should remain constant throughout the life of the
project. You can make this approach work if you ensure that the host and the
target build environments are in lockstep with each other. However, a much
better approach is to keep the host and the target separate, and a cross
toolchain is the way to do that.
However, there is a counter
argument in favor of native development. Cross development creates the burden
of cross-compiling all the libraries and tools that you need for your target.
We will see later in this chapter that cross-compiling is not always simple
because many open source packages are not designed to be
built in this way.
Integrated build tools, including Buildroot and the Yocto Project, help by
encapsulating the rules to cross compile a range of packages that you need in
typical embedded systems, but if you want to compile a large number of
additional packages, then it is better to natively compile them. For example,
building a Debian distribution for the Raspberry Pi or BeagleBone using a cross
compiler would be very hard. Instead, they are natively compiled. Creating a
native build environment from scratch is not easy. You would still need a cross
compiler at first to create the native build environment on the target, which
you then use to build the packages. Then, in order to perform the native build
in a reasonable amount of time, you would need a build farm of well-provisioned
target boards, or you may be able to use QEMU to emulate the target.
Meanwhile, in this book I will focus on the more mainstream cross compiler environment, which is
relatively easy to set up and administer.
CPU architectures
The toolchain has to be
built according to the capabilities of the target CPU, which includes:
CPU architecture: ARM, MIPS, x86_64, and so on
Big- or little-endian operation: Some CPUs can operate in both modes, but the machine code is different for
each
Floating point support: Not all versions of embedded processors implement a hardware floating point unit, in which case the toolchain has to be configured to call a software floating point library instead
Application Binary Interface (ABI): The calling convention used for passing parameters between function calls
With many architectures,
the ABI is constant across the family of processors. One notable exception is ARM.
The ARM architecture transitioned to the Extended
Application Binary Interface (EABI)
in the late 2000s, resulting in the
previous ABI being named the Old
Application Binary Interface (OABI).
While the OABI is now obsolete, you continue to see references to EABI. Since
then, the EABI has split into two, based on the way the floating point
parameters are passed. The original EABI uses general purpose (integer)
registers, while the newer Extended
Application Binary Interface Hard-Float (EABIHF) uses floating point registers. The EABIHF is significantly
faster at floating point operations, since it removes the need for copying
between integer and floating point registers, but it is not compatible with
CPUs that do not have a floating point unit. The choice, then, is between two
incompatible ABIs; you cannot mix and match the two, and so you have to decide
at this stage.
GNU uses a
prefix to the name of each tool in the toolchain, which identifies the various
combinations that can be generated. It consists of a tuple of three or four
components separated by dashes, as described here:
CPU: This is the CPU architecture, such as ARM, MIPS, or x86_64. If the CPU has both endian modes, they may be differentiated by adding el for little-endian or eb for big-endian. Good examples are little-endian MIPS, mipsel, and big-endian ARM, armeb.
Vendor: This identifies the provider of the toolchain. Examples include buildroot, poky, or just unknown. Sometimes it is left out altogether.
Kernel: For our purposes, it is always linux.
Operating system: A name for the user space component, which might be gnu or musl. The ABI may be appended here as well, so for
ARM toolchains,
you may
see gnueabi, gnueabihf, musleabi, or musleabihf.
You can find the tuple used
when building the toolchain by using the -dumpmachine option of gcc. For example, you may see the following on the
host computer:
$ gcc -dumpmachine
x86_64-linux-gnu
When a native compiler is installed on a machine, it is normal to create
links to each of the tools in the toolchain with no prefixes, so that you can
call the C compiler with the gcc command.
Here is an example using a cross compiler:
$ mipsel-unknown-linux-gnu-gcc -dumpmachine
mipsel-unknown-linux-gnu
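If a build script needs the individual fields of such a tuple, the dashes make it easy to split. Here is a small sketch; the tuple values are the examples used in the text.

```shell
# Sketch: splitting a GNU tuple on the dashes to recover its four components.
split_tuple() {
    old_ifs="$IFS"; IFS=-
    set -- $1
    IFS="$old_ifs"
    echo "cpu=$1 vendor=$2 kernel=$3 os=$4"
}
split_tuple mipsel-unknown-linux-gnu
# prints cpu=mipsel vendor=unknown kernel=linux os=gnu
```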
Choosing the C library
The programming interface
to the Unix operating system is defined in the C language, which is now defined
by the POSIX standards. The C library is the implementation of that interface;
it is the gateway to the kernel for Linux programs, as shown in the following
diagram. Even if you are writing programs in another language, maybe Java or
Python, the respective run-time support libraries will have to call the C library eventually, as shown here:
Whenever the C library
needs the services of the kernel, it will use the kernel system call interface
to transition between user space and kernel space. It is possible to bypass the
C library by making the kernel system calls directly, but that is a lot of
trouble and almost never necessary.
There are several C libraries to choose from. The main options are as
follows:
glibc: This is the standard GNU C library, available at http://www.gnu.org/software/libc. It is big and, until recently, not very configurable, but it is the most complete implementation of the POSIX API. The license is LGPL 2.1.
musl libc: This is available at https://www.musl-libc.org. The musl libc library is comparatively new, but has been gaining a lot of attention as a small and standards-compliant alternative to GNU libc. It is a good choice for systems with a limited amount of RAM and storage. It has an MIT license.
uClibc-ng: This is available at https://uclibc-ng.org/. The u is really a Greek mu character, indicating that this is the microcontroller C library. It was first developed to work with uClinux (Linux for CPUs without memory management units), but has since been adapted to be used with full Linux. The uClibc-ng library is a fork of the original uClibc project (https://uclibc.org/).
eglibc: This is available at http://www.eglibc.org/home. Now obsolete, eglibc was a fork of glibc with changes to make it more suitable for embedded usage. Among other things, eglibc added configuration options and support for architectures not covered by glibc, in particular the PowerPC e500 CPU core. The code base from eglibc was merged back into glibc in version 2.20. The eglibc library is no longer maintained.
So, which to choose? My advice is to use uClibc-ng only if you are using uClinux. If you have a very limited amount of storage or RAM, then musl libc is a good choice; otherwise, use glibc, as shown in this flow chart:
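The flow chart itself is an image; its decision logic, as described in the text, reduces to two questions and can be sketched as:

```shell
# The C library selection logic described above, reduced to two questions.
choose_libc() {
    uclinux="$1"   # "yes" if the target runs uClinux
    limited="$2"   # "yes" if RAM and storage are very limited
    if [ "$uclinux" = "yes" ]; then
        echo "uClibc-ng"
    elif [ "$limited" = "yes" ]; then
        echo "musl libc"
    else
        echo "glibc"
    fi
}
choose_libc no no    # prints glibc
```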
You have three choices for
your cross development toolchain: you may find a ready built toolchain that
matches your needs, you can use the one generated by an embedded build tool
which is covered in Selecting a Build System, or
you can create one yourself as described later in this chapter.
A pre-built cross toolchain
is an attractive option in that you only have to download and install it, but
you are limited to the configuration of that particular toolchain and you are
dependent on the person or organization you got it from. Most likely, it will
be one of these:
An SoC or board vendor. Most vendors offer a
Linux toolchain.
A consortium dedicated to providing system-level support for a given architecture. For example, Linaro (https://www.linaro.org/) has pre-built toolchains for the ARM architecture.
A third-party Linux tool
vendor, such as Mentor Graphics, TimeSys, or MontaVista.
The cross tool packages for
your desktop Linux distribution. For example, Debian-based distributions have
packages for cross compiling for ARM, MIPS, and PowerPC targets.
A binary SDK produced by
one of the integrated embedded build tools. The Yocto Project has some examples
at
http://downloads.yoctoproject.org/releases/yocto/yocto-[version]/toolchain.
A link from a forum that you can't find any
more.
In all of these cases, you
have to decide whether the pre-built toolchain on offer meets your
requirements. Does it use the C library you prefer? Will the provider give you
updates for security fixes and bugs, bearing in mind my comments on support and updates from Starting Out? If your answer
is no to any of these, then you should consider creating your own.
Unfortunately, building a
toolchain is no easy task. If you truly want to do the whole thing yourself,
take a look at Cross Linux From Scratch
(http://trac.clfs.org). There you will find
step-by-step instructions on how to create each component.
A simpler alternative is to
use crosstool-NG, which encapsulates the process into a set of scripts and has
a menu-driven frontend. You still need a fair degree of knowledge, though, just
to make the right choices.
It is simpler still to use
a build system such as Buildroot or the Yocto Project, since they generate a
toolchain as part of the build process. This is my preferred solution, as I
have shown in Selecting a Build System.
Building a toolchain using crosstool-NG
Some years ago, Dan Kegel
wrote a set of scripts and makefiles
for generating cross development toolchains and called it crosstool (http://kegel.com/crosstool/). In 2007, Yann E. Morin used that base to
create the next generation of crosstool, crosstool-NG (http://crosstool-ng.github.io/). Today it is by far the most convenient way to
create a stand-alone cross toolchain from source.
Installing crosstool-NG
Before you
begin, you will need a working native toolchain and build tools on your host
PC. To work with crosstool-NG on an Ubuntu host, you will need to install the
packages using the following command:
$ sudo apt-get install automake bison chrpath flex g++ git gperf \ gawk
libexpat1-dev libncurses5-dev libsdl1.2-dev libtool \ python2.7-dev texinfo
Next, get the current
release from the crosstool-NG Git repository. In my examples, I have used
version 1.22.0. Extract it and create the frontend menu system, ct-ng, as shown in the following
commands:
$ git clone https://github.com/crosstool-ng/crosstool-ng.git
$ cd crosstool-ng
$ git checkout crosstool-ng-1.22.0
$ ./bootstrap
$ ./configure --enable-local
$ make
$ make install
The --enable-local option means that the
program will be installed into the current directory, which avoids the need for
root permissions, as would be required if you were to install it in the default
location /usr/local/bin. Type ./ct-ng from the current directory
to launch the crosstool menu.
Building a toolchain for BeagleBone Black
Crosstool-NG
can build many different combinations of toolchains. To make the initial
configuration easier, it comes with a set of samples that cover many of the
common use-cases. Use ./ct-ng list-samples to generate the list.
The BeagleBone Black has a
TI AM335x SoC, which contains an ARM Cortex A8 core and a VFPv3 floating point
unit. Since the BeagleBone Black has plenty of RAM and storage, we can use glibc as the C library. The
closest sample is arm-cortex_a8-linux-gnueabi. You can see the default
configuration by prefixing the name
with show-, as demonstrated here:
$ ./ct-ng show-arm-cortex_a8-linux-gnueabi
[L..]   arm-cortex_a8-linux-gnueabi
    OS             : linux-4.3
    Companion libs : gmp-6.0.0a mpfr-3.1.3 mpc-1.0.3 libelf-0.8.13 expat-2.1.0 ncurses-6.0
    binutils       : binutils-2.25.1
    C compilers    : gcc  |  5.2.0
    Languages      : C,C++
    C library      : glibc-2.22 (threads: nptl)
    Tools          : dmalloc-5.5.2 duma-2_5_15 gdb-7.10 ltrace-0.7.3 strace-4.10
This is a close match with our requirements, except that it is using the eabi binary interface, which passes floating point
arguments in integer registers. We would prefer to use hardware floating point
registers for that purpose because it would speed up function calls that have
float and double parameter types. You can change the configuration later on, so
for now you should select this target configuration:
$ ./ct-ng arm-cortex_a8-linux-gnueabi
At this point, you can
review the configuration and make changes using the configuration menu command menuconfig:
$ ./ct-ng menuconfig
The menu system is based on the Linux kernel menuconfig, and so navigation of the user interface will be familiar to anyone who has configured a kernel.
There are two configuration
changes that I would recommend you make at this point:
In Paths and misc options, disable Render the
toolchain read-only
(CT_INSTALL_DIR_RO)
In Target options | Floating point, select hardware
(FPU) (CT_ARCH_FLOAT_HW)
The first is necessary if you want to add libraries to the toolchain after it has been installed, which I describe later in this chapter. The second selects the eabihf binary interface for the
reasons discussed earlier. The names in parentheses are the configuration
labels stored in the configuration file. When you have made the changes, exit
the menuconfig menu and save the
configuration as you do so.
Now you can use
crosstool-NG to get, configure, and build the components according to your
specification, by typing the following command:
$ ./ct-ng build
The build will take about half an hour, after which you will find your
toolchain is
present in ~/x-tools/arm-cortex_a8-linux-gnueabihf.
Building a toolchain for QEMU
On the QEMU target, you will be emulating an ARM Versatile PB evaluation board that has an ARM926EJ-S processor core, which implements the ARMv5TE instruction set. You need to
generate a crosstool-NG toolchain that matches with the specification. The procedure
is very similar to the one for the BeagleBone Black.
You begin by running ./ct-ng list-samples to find a good base
configuration to work from. There isn't an exact fit, so use a generic target, arm-unknown-linux-gnueabi. You select it as shown,
running distclean first to make sure that there are no artifacts left over from a previous build:
$ ./ct-ng distclean
$ ./ct-ng arm-unknown-linux-gnueabi
As with the BeagleBone
Black, you can review the configuration and make changes using the configuration
menu command ./ct-ng menuconfig.
There is only one change necessary:
In Paths and misc options, disable Render the
toolchain read-only
(CT_INSTALL_DIR_RO)
Now, build the toolchain with the command as shown here:
$ ./ct-ng build
As before, the build will take about half an hour. The toolchain will be
installed
in ~/x-tools/arm-unknown-linux-gnueabi.
Anatomy of a toolchain
To get an idea of what is
in a typical toolchain, I want to examine the crosstool-NG toolchain you have
just created. The examples use the ARM Cortex A8 toolchain created for the
BeagleBone Black, which has the prefix arm-cortex_a8-linux-gnueabihf-. If you built the
ARM926EJ-S toolchain for the QEMU target, then the prefix will be arm-unknown-linux-gnueabi instead.
The ARM Cortex A8 toolchain is in the directory ~/x-tools/arm-cortex_a8-linux-gnueabihf/bin. In there you will find the cross compiler, arm-cortex_a8-linux-gnueabihf-gcc. To make use of it, you
need to add the directory to your path using the following command:
$ PATH=~/x-tools/arm-cortex_a8-linux-gnueabihf/bin:$PATH
Now you can take a simple
helloworld program, which in the C language looks like this:
#include <stdio.h>
#include <stdlib.h>
int main (int argc, char *argv[])
{
printf ("Hello, world!\n");
return 0;
}
You compile it like this:
$ arm-cortex_a8-linux-gnueabihf-gcc helloworld.c -o helloworld
You can confirm that it has
been cross compiled by using the file command to print the type of the file:
$ file helloworld
helloworld: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked
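In a build or deploy script you might automate this check rather than eyeballing the output. Here is a sketch using file and grep; the pattern matching is deliberately loose, and helloworld is the binary compiled above.

```shell
# Sketch: guard a deploy step by checking the architecture that 'file' reports
# before copying a binary to the target (assumes 'file' is installed).
is_arch() {
    file -b "$1" | grep -q "$2"
}
if is_arch helloworld "ARM"; then
    echo "helloworld is an ARM binary; safe to copy to the target"
fi
```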
Finding out about your cross compiler
Imagine that you have just
received a toolchain and that you would like to know more about how it was
configured. You can find out a lot by querying gcc. For example, to find the version, you use --version:
$ arm-cortex_a8-linux-gnueabihf-gcc --version
arm-cortex_a8-linux-gnueabihf-gcc (crosstool-NG crosstool-ng-1.22.0) 5.2.0
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see
the source for copying conditions. There is NO warranty; not even for
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
To find how it was configured, use -v:
$ arm-cortex_a8-linux-gnueabihf-gcc -v
Using built-in specs.
COLLECT_GCC=arm-cortex_a8-linux-gnueabihf-gcc
COLLECT_LTO_WRAPPER=/home/chris/x-tools/arm-cortex_a8-linux-gnueabihf/libexec/gcc/arm-c
Target: arm-cortex_a8-linux-gnueabihf
Configured with: /home/chris/crosstool-ng/.build/src/gcc-5.2.0/configure --build=x86_64
Thread model: posix
gcc version 5.2.0 (crosstool-NG crosstool-ng-1.22.0)
There is a lot of output there, but the interesting things to note are:
--with-sysroot=/home/chris/x-tools/arm-cortex_a8-linux-gnueabihf/arm-cortex_a8-linux-gnueabihf/sysroot: This is the default sysroot directory; see the following section for an explanation
--enable-languages=c,c++: Both the C and C++ languages are enabled
--with-cpu=cortex-a8: The code is generated for an ARM Cortex A8 core
--with-float=hard: Generates opcodes for the floating point unit and uses the VFP registers for parameters
--enable-threads=posix: This enables POSIX threads
These are the default settings for the compiler. You can override most of them on the gcc command line. For example, if you want to compile for a different CPU, you can override the configured setting, --with-cpu, by adding -mcpu to the command line, as follows:
$ arm-cortex_a8-linux-gnueabihf-gcc -mcpu=cortex-a5 helloworld.c -o helloworld
You can print out the range of architecture-specific options available using --target-help, as follows:
$ arm-cortex_a8-linux-gnueabihf-gcc --target-help
You may be wondering if it matters that you get
the configuration exactly right at this point, since you can always change it as
shown here. The answer depends on the way you anticipate using it. If you plan
to create a new toolchain for each target, then it makes sense to set
everything up at the beginning, because it will reduce the risks of getting it
wrong later on. Jumping ahead a little to Selecting a Build System, I call this the Buildroot philosophy. If, on the other hand, you want to build a toolchain
that is generic and you are prepared to provide the correct settings when you
build for a particular target, then you should make the base toolchain generic,
which is the way the Yocto Project handles things. The preceding examples
follow the Buildroot philosophy.
The sysroot, library, and
header files
The toolchain sysroot is a directory which
contains subdirectories for libraries, header files, and other configuration
files. It can be set when the toolchain is configured through --with-sysroot=, or it can be set on the command line using --sysroot=. You can see the location of the default sysroot by using -print-sysroot:
$ arm-cortex_a8-linux-gnueabihf-gcc -print-sysroot
/home/chris/x-tools/arm-cortex_a8-linux-gnueabihf/arm-cortex_a8-linux-gnueabihf/sysroot
You will find the following subdirectories in sysroot:
lib: Contains the shared objects for the C library and the dynamic linker/loader, ld-linux
usr/lib: Contains the static library archive files for the C library, and any other libraries that may be installed subsequently
usr/include: Contains the headers for all the libraries
usr/bin: Contains the utility programs that run on the target, such as the ldd command
usr/share: Used for localization and internationalization
sbin: Provides the ldconfig utility, used to optimize library loading paths
Plainly, some of these are
needed on the development host to compile programs, and others - for example,
the shared libraries and ld-linux
- are needed on the target at runtime.
Other tools in the
toolchain
The following table shows various other components of a GNU toolchain, together with a brief description:

addr2line: Converts program addresses into filenames and line numbers by reading the debug symbol tables in an executable file. It is very useful when decoding addresses printed out in a system crash report.
ar: The archive utility is used to create static libraries.
as: This is the GNU assembler.
c++filt: This is used to demangle C++ and Java symbols.
cpp: This is the C preprocessor, and is used to expand #define, #include, and other similar directives. You seldom need to use this by itself.
elfedit: This is used to update the ELF header of ELF files.
g++: This is the GNU C++ frontend, which assumes that source files contain C++ code.
gcc: This is the GNU C frontend, which assumes that source files contain C code.
gcov: This is a code coverage tool.
gdb: This is the GNU debugger.
gprof: This is a program profiling tool.
ld: This is the GNU linker.
nm: This lists symbols from object files.
objcopy: This is used to copy and translate object files.
objdump: This is used to display information from object files.
ranlib: This creates or modifies an index in a static library, making the linking stage faster.
readelf: This displays information about files in ELF object format.
size: This lists section sizes and the total size.
strings: This displays strings of printable characters in files.
strip: This is used to strip an object file of its debug symbol tables, thus making it smaller. Typically, you would strip all the executable code that is put onto the target.
Looking at the components of the C library
The C library is not a
single library file. It is composed of four main parts that together implement
the POSIX API:
libc: The main C library that contains the well-known POSIX functions
such as printf, open, close, read, write, and so on
libm: Contains maths functions such as cos, exp, and log
libpthread: Contains all the POSIX thread functions with names beginning
with pthread_
librt: Has the real-time
extensions to POSIX, including shared memory and asynchronous I/O
The first one, libc, is always linked in but
the others have to be explicitly linked with the -l option. The parameter to -l is the library name with lib stripped off. For example,
a program that calculates a sine function by calling sin() would be linked with libm using -lm:
$ arm-cortex_a8-linux-gnueabihf-gcc myprog.c -o myprog -lm
You can verify
which libraries have been linked in this or any other program by using the readelf command:
$ arm-cortex_a8-linux-gnueabihf-readelf -a myprog | grep "Shared library"
0x00000001 (NEEDED) Shared library: [libm.so.6]
0x00000001 (NEEDED) Shared library: [libc.so.6]
Shared libraries need a runtime linker, which you can expose using:
$ arm-cortex_a8-linux-gnueabihf-readelf -a myprog | grep "program interpreter"
[Requesting program interpreter: /lib/ld-linux-armhf.so.3]
This is so useful that I
have a script file named list-libs,
which you will find in the code archive in MELP/list-libs. It contains the following commands:
#!/bin/sh
${CROSS_COMPILE}readelf -a $1 | grep "program interpreter"
${CROSS_COMPILE}readelf -a $1 | grep "Shared library"
Linking with libraries – static and dynamic linking
Any application you write
for Linux, whether it be in C or C++, will be linked with the C library libc. This is so fundamental
that you don't even have to tell gcc or g++ to do it because it always links libc. Other libraries that you
may want to link with have to be explicitly named through the -l option.
The library code can be
linked in two different ways: statically, meaning that all the library
functions your application calls and their dependencies are pulled from the
library archive and bound into your executable; and dynamically, meaning that
references to the library files and functions in those files are generated in
the code but the actual linking is done dynamically at runtime. You will find the code for the examples that follow in the code archive.
Static libraries
Static linking is useful in a few circumstances.
For example, if you are building a small system which consists of only BusyBox
and some script files, it is simpler to link BusyBox statically and avoid
having to copy the runtime library files and linker. It will also be smaller
because you only link in the code that your application uses rather than
supplying the entire C library. Static linking is also useful if you need to
run a program before the filesystem that holds the runtime libraries is
available.
You tell gcc to link all the libraries statically by adding -static to the command line:
$ arm-cortex_a8-linux-gnueabihf-gcc -static helloworld.c -o helloworld-static
You will note that the size of the binary increases dramatically:
$ ls -l
-rwxrwxr-x 1 chris chris   5884 Mar 5 09:56 helloworld
-rwxrwxr-x 1 chris chris 614692 Mar 5 10:27 helloworld-static
Static linking pulls code
from a library archive, usually named lib[name].a. In the preceding case, it is libc.a, which is in [sysroot]/usr/lib:
$ export SYSROOT=$(arm-cortex_a8-linux-gnueabihf-gcc -print-sysroot)
$ cd $SYSROOT
$ ls -l usr/lib/libc.a
-rw-r--r-- 1 chris chris 3457004 Mar 3 15:21 usr/lib/libc.a
Note that the syntax export
SYSROOT=$(arm-cortex_a8-linux-gnueabihf-gcc -print-sysroot)
places the path to the
sysroot in the shell variable, SYSROOT, which makes the example a little
clearer.
Creating a
static library is as simple as creating an archive of object files using the ar command. If I have two
source files named test1.c and test2.c, and I want to create a
static library named libtest.a, then I would do the following:
$ arm-cortex_a8-linux-gnueabihf-gcc -c test1.c
$ arm-cortex_a8-linux-gnueabihf-gcc -c test2.c
$ arm-cortex_a8-linux-gnueabihf-ar rc libtest.a test1.o test2.o
$ ls -l
total 24
-rw-rw-r-- 1 chris chris 2392 Oct 9 09:28 libtest.a
-rw-rw-r-- 1 chris chris  116 Oct 9 09:26 test1.c
-rw-rw-r-- 1 chris chris 1080 Oct 9 09:27 test1.o
-rw-rw-r-- 1 chris chris  121 Oct 9 09:26 test2.c
-rw-rw-r-- 1 chris chris 1088 Oct 9 09:27 test2.o
Then I could link libtest into my helloworld program,
using:
$ arm-cortex_a8-linux-gnueabihf-gcc helloworld.c -ltest \
-L../libs -I../libs -o helloworld
Shared libraries
A more common way to deploy
libraries is as shared objects that are linked at runtime, which makes more
efficient use of storage and system memory, since only one copy of the code
needs to be loaded. It also makes it easy to update the library files without
having to re-link all the programs that use them.
The object code for a
shared library must be position-independent, so that the runtime linker is free
to locate it in memory at the next free address. To do this, add the -fPIC parameter to gcc, and then
link it using the -shared option:
$ arm-cortex_a8-linux-gnueabihf-gcc -fPIC -c test1.c
$ arm-cortex_a8-linux-gnueabihf-gcc -fPIC -c test2.c
$ arm-cortex_a8-linux-gnueabihf-gcc -shared -o libtest.so test1.o test2.o
This creates the shared
library, libtest.so. To link an application
with this library, you add -ltest,
exactly as in the static case mentioned in the preceding section, but this time
the code is not included in the executable. Instead, there is a reference to
the library that the runtime linker will have to resolve:
$ arm-cortex_a8-linux-gnueabihf-gcc helloworld.c -ltest \
-L../libs -I../libs -o helloworld
$ MELP/list-libs helloworld
[Requesting program
interpreter: /lib/ld-linux-armhf.so.3]
0x00000001 (NEEDED) Shared library: [libtest.so]
0x00000001 (NEEDED) Shared library: [libc.so.6]
The runtime linker for this
program is /lib/ld-linux-armhf.so.3, which must be present in
the target's filesystem. The linker will look for libtest.so in the default search
path: /lib and /usr/lib. If you want it to look
for libraries in other directories as well, you can place a colon-separated
list of paths in the shell variable LD_LIBRARY_PATH:
# export LD_LIBRARY_PATH=/opt/lib:/opt/usr/lib
Understanding shared library version numbers
One of the benefits of
shared libraries is that they can be updated independently of the programs that
use them. Library updates are of two types: those that fix bugs or add new
functions in a backwards-compatible way, and those that break compatibility
with existing applications. GNU/Linux has a versioning scheme to handle both
these cases.
Each library has a release version and an
interface number. The release version is simply a string that is appended to
the library name; for example, the JPEG image library libjpeg is currently at release 8.0.2, and so the library is named libjpeg.so.8.0.2. There is a symbolic link named libjpeg.so to libjpeg.so.8.0.2, so that when you compile a program with -ljpeg, you link with the current version. If you install version 8.0.3, the link is updated and you will link with that one instead.
Now suppose that version 9.0.0 comes along and
that breaks the backwards compatibility. The link from libjpeg.so now points to libjpeg.so.9.0.0, so that any new programs
are linked with the new version, possibly throwing compile errors when the
interface to libjpeg changes, which the
developer can fix. Any programs on the target that are not recompiled are going
to fail in some way, because they are still using the old interface. This is
where an object known as the soname
helps. The soname encodes the interface number when the library was built and
is used by the runtime linker when it loads the library. It is formatted as <library name>.so.<interface
number>.
For libjpeg.so.8.0.2, the soname is libjpeg.so.8:
$ readelf -a /usr/lib/libjpeg.so.8.0.2 | grep SONAME
0x000000000000000e (SONAME) Library soname: [libjpeg.so.8]
Any program compiled with
it will request libjpeg.so.8 at runtime, which will be
a symbolic link on the target to libjpeg.so.8.0.2. When version 9.0.0 of libjpeg is installed, it will have
a soname of libjpeg.so.9, and so it is possible to
have two incompatible versions of the same library installed on the same
system. Programs that were linked with libjpeg.so.8.*.* will load libjpeg.so.8, and those linked with libjpeg.so.9.*.* will load libjpeg.so.9.
This is why, when you look at the directory listing of <sysroot>/usr/lib/libjpeg*, you find these four files:
libjpeg.a: This is the library archive used for static linking
libjpeg.so -> libjpeg.so.8.0.2: This is a symbolic link, used for dynamic linking
libjpeg.so.8 -> libjpeg.so.8.0.2: This is a symbolic link, used when loading the library at runtime
libjpeg.so.8.0.2: This is the actual shared library, used at both compile time and runtime
The first two are only
needed on the host computer for building and the last two are needed on the
target at runtime.
The art of cross compiling
Having a working cross toolchain is the starting
point of a journey, not the end of it. At some point, you will want to begin
cross compiling the various tools, applications, and libraries that you need on
your target. Many of them will be open source packages—each of which has its
own method of compiling and its own peculiarities. There are some common build
systems, including:
Pure makefiles, where the toolchain is usually controlled by the make variable CROSS_COMPILE
The GNU build system known as Autotools
CMake (https://cmake.org)
I will cover only the first two here since these are the ones needed for even a basic embedded Linux system. For CMake, there are some excellent resources on the CMake website referenced in the preceding point.
Simple makefiles
Some important packages are
very simple to cross compile, including the Linux kernel, the U-Boot
bootloader, and BusyBox. For each of these, you only need to put the toolchain prefix in the make variable CROSS_COMPILE, for example arm-cortex_a8-linux-gnueabihf-. Note the trailing dash -.
So, to compile BusyBox, you would type:
$ make CROSS_COMPILE=arm-cortex_a8-linux-gnueabihf-
Or, you can set it as a shell variable:
$ export CROSS_COMPILE=arm-cortex_a8-linux-gnueabihf-
$ make
In the case of
U-Boot and Linux, you also have to set the make variable ARCH to one of the machine architectures they support, which I will cover in All About Bootloaders and Configuring and Building the Kernel.
Autotools
The name Autotools refers to a group of tools that are used as the build system in many open source projects. The main components are GNU Autoconf, GNU Automake, and GNU Libtool.
The role of Autotools is to
smooth over the differences between the many different types of systems that
the package may be compiled for, accounting for different versions of
compilers, different versions of libraries, different locations of header
files, and dependencies with other packages. Packages that use Autotools come
with a script named configure
that checks dependencies and generates makefiles according to what it finds.
The configure script may also give you
the opportunity to enable or disable certain features. You can find the options
on offer by running ./configure --help.
To configure, build, and
install a package for the native operating system, you would typically run the
following three commands:
$ ./configure
$ make
$ sudo make install
Autotools is able to handle
cross development as well. You can influence the behavior of the configure
script by setting these shell variables:
CC: The C compiler command
CFLAGS: Additional C compiler flags
LDFLAGS: Additional linker flags; for example, if you have libraries in a non-standard directory <lib dir>, you would add it to the library search path by adding -L<lib dir>
LIBS: Contains a list of additional libraries to pass to the linker; for instance, -lm for the math library
CPPFLAGS: Contains C preprocessor flags; for example, you would add -I<include dir> to search for headers in a non-standard directory <include dir>
CPP: The C preprocessor to use
Sometimes it is sufficient to set only the CC variable, as follows:
$ CC=arm-cortex_a8-linux-gnueabihf-gcc ./configure
At other times, that will result in an error like this:
[...]
checking whether we are cross compiling...
configure: error: in '/home/chris/MELP/build'
configure: error: cannot run C compiled programs.
If you meant to cross compile, use '--host'.
See 'config.log' for more details
The reason for the failure
is that configure often tries to discover the capabilities of the toolchain by
compiling snippets of code and running them to see what happens, which cannot
work if the program has been cross compiled. Nevertheless, there is a hint in
the error message on how to solve the problem. Autotools understands three
different types of machines that may be involved when compiling a package:
Build is the computer that builds the package, which defaults to the current machine.
Host is the computer the program will run on; for a native compile, this is left blank and it defaults to be the same computer as build. When you are cross compiling, set it to be the tuple of your toolchain.
Target is the computer the program will generate code for; you would set this when building a cross compiler, for example.
So, to cross compile, you just need to override the host, as follows:
$ CC=arm-cortex_a8-linux-gnueabihf-gcc \
./configure --host=arm-cortex_a8-linux-gnueabihf
One final thing to note is that the default install directory is <sysroot>/usr/local/*. You would usually install it in <sysroot>/usr/*, so that the header files and libraries would be picked up from their default locations. The complete command to configure a typical Autotools package is as follows:
$ CC=arm-cortex_a8-linux-gnueabihf-gcc \
./configure --host=arm-cortex_a8-linux-gnueabihf --prefix=/usr
An example: SQLite
The SQLite library
implements a simple relational database and is quite popular on embedded
devices. You begin by getting a copy of SQLite:
$ wget http://www.sqlite.org/2015/sqlite-autoconf-3081101.tar.gz
$ tar xf sqlite-autoconf-3081101.tar.gz
$ cd sqlite-autoconf-3081101
Next, run the configure script:
$ CC=arm-cortex_a8-linux-gnueabihf-gcc \
./configure --host=arm-cortex_a8-linux-gnueabihf --prefix=/usr
That seems to work! If it
had failed, there would be error messages printed to the Terminal and recorded
in config.log. Note that several
makefiles have been created, so now you can build it:
$ make
Finally, you install it
into the toolchain directory by setting the make variable DESTDIR. If you don't, it will try to install it into
the host computer's
/usr directory, which is not what you want:
$ make DESTDIR=$(arm-cortex_a8-linux-gnueabihf-gcc -print-sysroot) install
You may find that the final
command fails with a file permissions error. A crosstool-NG toolchain is
read-only by default, which is why it is useful to set CT_INSTALL_DIR_RO to y when building it. Another
common problem is that the toolchain
is installed in a system directory, such as /opt or /usr/local, in which case you will need root permissions when running
the install.
After installing, you
should find that various files have been added to your toolchain:
<sysroot>/usr/bin: sqlite3: This is a command-line interface for SQLite that you can install and run on the target
<sysroot>/usr/lib: libsqlite3.so.0.8.6, libsqlite3.so.0, libsqlite3.so, libsqlite3.la: These are the shared libraries and the libtool archive
<sysroot>/usr/lib/pkgconfig: sqlite3.pc: This is the package configuration file, as described in the following section
<sysroot>/usr/include: sqlite3.h, sqlite3ext.h: These are the header files
<sysroot>/usr/share/man/man1: sqlite3.1: This is the manual page
Now you can compile
programs that use sqlite3 by adding -lsqlite3 at the link stage:
$ arm-cortex_a8-linux-gnueabihf-gcc -lsqlite3 sqlite-test.c -o sqlite-test
Here, sqlite-test.c is a hypothetical program
that calls SQLite functions. Since sqlite3 has been installed into the sysroot, the compiler will find
the header and library files without any problem. If they had been installed elsewhere, you would have had to add -L<lib dir> and -I<include dir>.
Naturally, there will be runtime dependencies as well, and you will have to install the appropriate files into the target directory as described in Building a Root Filesystem.
Package configuration
Tracking package dependencies is quite complex. The package
configuration
utility pkg-config (https://www.freedesktop.org/wiki/Software/pkg-config/) helps track which
packages are installed and which compile flags each needs by keeping a database
of Autotools packages in [sysroot]/usr/lib/pkgconfig. For instance, the one for SQLite3 is named sqlite3.pc and contains essential
information needed by other packages that need to make use of it:
$ cat $(arm-cortex_a8-linux-gnueabihf-gcc -print-sysroot)/usr/lib/pkgconfig/sqlite3.pc
# Package Information for
pkg-config
prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include
Name: SQLite
Description: SQL database engine
Version: 3.8.11.1
Libs: -L${libdir} -lsqlite3
Libs.private: -ldl -lpthread
Cflags: -I${includedir}
You can use pkg-config to extract information in
a form that you can feed straight to gcc. In the case of a library like libsqlite3, you want to know the
library name (--libs) and any special C flags (--cflags):
$ pkg-config sqlite3 --libs --cflags
Package sqlite3 was not found in the pkg-config search path.
Perhaps you should add the directory containing `sqlite3.pc' to the PKG_CONFIG_PATH environment variable
No package 'sqlite3' found
Oops! That failed because
it was looking in the host's sysroot
and the development package for libsqlite3
has not been installed on the host. You need to point it at the sysroot of the target toolchain by
setting the shell variable
PKG_CONFIG_LIBDIR:
$ export PKG_CONFIG_LIBDIR=$(arm-cortex_a8-linux-gnueabihf-gcc \
-print-sysroot)/usr/lib/pkgconfig
$ pkg-config sqlite3 --libs --cflags
-lsqlite3
Now the output is -lsqlite3. In this case, you knew that already, but generally you won't, so this is a valuable technique. The complete build commands would be:
$ export PKG_CONFIG_LIBDIR=$(arm-cortex_a8-linux-gnueabihf-gcc \
-print-sysroot)/usr/lib/pkgconfig
$ arm-cortex_a8-linux-gnueabihf-gcc $(pkg-config sqlite3 --cflags --libs) \
sqlite-test.c -o sqlite-test
Problems with cross
compiling
SQLite is a well-behaved package and cross compiles nicely, but not all packages are the same. Typical pain points include:
Home-grown build systems; zlib, for example, has a
configure script, but it does not behave like the Autotools configure described
in the previous section
Configure scripts that read
pkg-config information, headers, and
other files from the host, disregarding the --host override
Scripts that insist on trying to run cross
compiled code
Each case requires careful
analysis of the error and additional parameters to the configure script to
provide the correct information, or patches to the code to avoid the problem
altogether. Bear in mind that one package may have many dependencies,
especially with programs that have a graphical interface using GTK or Qt, or
that handle multimedia content. As an example, mplayer, which is a popular tool for playing multimedia
content, has dependencies on over 100 libraries. It would take weeks of effort
to build them all.
Therefore, I would not
recommend manually cross compiling components for the target in this way,
except when there is no alternative or the number of packages to build is
small. A much better approach is to use a build tool such as Buildroot or the
Yocto Project, or avoid the problem altogether by setting up a native build
environment for your target architecture. Now you can see why distributions
like Debian are always compiled natively.
TLDR;
The toolchain is always
your starting point; everything that follows from that is dependent on having a
working, reliable toolchain.
Most embedded build environments are based on a
cross development toolchain, which creates a clear separation between a
powerful host computer building the code and a target computer on which it
runs. The toolchain itself consists of the GNU binutils, a C compiler from the GNU compiler collection (and quite likely the C++ compiler as well), plus one of the C libraries I have described. Usually, the GNU debugger, GDB, will be generated at this point as well. Also, keep a watch on the Clang compiler, as it will develop over the next few years.
You may start with nothing
but a toolchain—perhaps built using crosstool-NG or downloaded from Linaro—and
use it to compile all the packages that you need on your target, accepting the
amount of hard work this will entail. Or you may obtain the toolchain as part
of a distribution which includes a range of packages. A distribution can be
generated from source code using a build system such as Buildroot or the Yocto
Project, or it can be a binary distribution from a third party, maybe a
commercial enterprise like Mentor Graphics, or an open source project such as
the Denx ELDK. Beware of toolchains or distributions that are offered to you
for free as part of a hardware package; they are often poorly configured and
not maintained. In any case, you should make your choice according to your
situation, and then be consistent in its use throughout the project.
Once you have a
toolchain, you can use it to build the other components of your embedded Linux
system. In the next section, you will learn about the bootloader, which brings
your device to life and begins the boot process.