This section covers how new packages (userspace libraries or applications) can be integrated into Buildroot. It also shows how existing packages are integrated, which is needed for fixing issues or tuning their configuration.
When you add a new package, be sure to test it in various conditions (see Section 18.25.3, “How to test your package”) and also check it for coding style (see Section 18.25.2, “How to check the coding style”).
First of all, create a directory under the package directory for your software, for example libfoo.
Some packages have been grouped by topic in a sub-directory: x11r7, qt5 and gstreamer. If your package fits in one of these categories, then create your package directory in these. New subdirectories are discouraged, however.
For the package to be displayed in the configuration tool, you need to create a Config file in your package directory. There are two types: Config.in and Config.in.host.
For packages used on the target, create a file named Config.in. This file will contain the option descriptions related to our libfoo software that will be used and displayed in the configuration tool. It should basically contain:
config BR2_PACKAGE_LIBFOO
	bool "libfoo"
	help
	  This is a comment that explains what libfoo is. The help text
	  should be wrapped.

	  http://foosoftware.org/libfoo/
The bool line, help line and other metadata information about the configuration option must be indented with one tab. The help text itself should be indented with one tab and two spaces, and lines should be wrapped to fit 72 columns, where a tab counts for 8, so 62 characters in the text itself. The help text must mention the upstream URL of the project after an empty line.
As a convention specific to Buildroot, the ordering of the attributes is as follows:

1. The type of option: bool, string… with the prompt
2. If needed: default value(s)
3. Any dependencies on the target in depends on form
4. Any dependencies on the toolchain in depends on form
5. Any dependencies on other packages in depends on form
6. Any dependency of the select form
7. The help keyword and help text
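As an illustration only, a sketch of a Config.in entry following this ordering convention (the default condition and BR2_PACKAGE_LIBBAR are made-up symbols for the example):

config BR2_PACKAGE_LIBFOO
	bool "libfoo"
	default y if BR2_PACKAGE_FOO_TOOLS
	depends on BR2_USE_MMU
	depends on BR2_TOOLCHAIN_HAS_THREADS
	depends on BR2_PACKAGE_LIBBAR
	select BR2_PACKAGE_ZLIB
	help
	  Example package illustrating the attribute ordering.

	  http://foosoftware.org/libfoo/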
You can add other sub-options into an if BR2_PACKAGE_LIBFOO…endif statement to configure particular things in your software. You can look at examples in other packages. The syntax of the Config.in file is the same as the one for the kernel Kconfig file. The documentation for this syntax is available at http://kernel.org/doc/Documentation/kbuild/kconfig-language.txt
Finally you have to add your new libfoo/Config.in
to
package/Config.in
(or in a category subdirectory if you decided to
put your package in one of the existing categories). The files
included there are sorted alphabetically per category and are NOT
supposed to contain anything but the bare name of the package.
source "package/libfoo/Config.in"
Some packages also need to be built for the host system. There are two options here:

- The host package is only required to satisfy build-time dependencies of a target package. In this case, add host-foo to the target package's BAR_DEPENDENCIES variable. No Config.in.host file should be created.
- The host package should be explicitly selectable by the user from the configuration menu. In this case, create a Config.in.host file for that host package:
config BR2_PACKAGE_HOST_FOO
	bool "host foo"
	help
	  This is a comment that explains what foo for the host is.

	  http://foosoftware.org/foo/
The same coding style and options as for the Config.in
file are valid.
Finally you have to add your new libfoo/Config.in.host
to
package/Config.in.host
. The files included there are sorted alphabetically
and are NOT supposed to contain anything but the bare name of the package.
source "package/foo/Config.in.host"
The host package will then be available from the Host utilities
menu.
The Config.in
file of your package must also ensure that
dependencies are enabled. Typically, Buildroot uses the following
rules:
- Use a select type of dependency for dependencies on libraries. These dependencies are generally not obvious and it therefore makes sense to have the kconfig system ensure that the dependencies are selected. For example, the libgtk2 package uses select BR2_PACKAGE_LIBGLIB2 to make sure this library is also enabled. The select keyword expresses the dependency with a backward semantic.
- Use a depends on type of dependency when the user really needs to be aware of the dependency. Typically, Buildroot uses this type of dependency for dependencies on target architecture, MMU support and toolchain options (see Section 18.2.4, “Dependencies on target and toolchain options”), or for dependencies on "big" things, such as the X.org system. The depends on keyword expresses the dependency with a forward semantic.
Note. The current problem with the kconfig language is that these two dependency semantics are not internally linked. Therefore, it may be possible to select a package whose dependencies/requirements are not met.
An example illustrates both the usage of select
and depends on
.
config BR2_PACKAGE_RRDTOOL
	bool "rrdtool"
	depends on BR2_USE_WCHAR
	select BR2_PACKAGE_FREETYPE
	select BR2_PACKAGE_LIBART
	select BR2_PACKAGE_LIBPNG
	select BR2_PACKAGE_ZLIB
	help
	  RRDtool is the OpenSource industry standard, high performance
	  data logging and graphing system for time series data.

	  http://oss.oetiker.ch/rrdtool/

comment "rrdtool needs a toolchain w/ wchar"
	depends on !BR2_USE_WCHAR
Note that these two dependency types are only transitive with the dependencies of the same kind.
This means, in the following example:
config BR2_PACKAGE_A
	bool "Package A"

config BR2_PACKAGE_B
	bool "Package B"
	depends on BR2_PACKAGE_A

config BR2_PACKAGE_C
	bool "Package C"
	depends on BR2_PACKAGE_B

config BR2_PACKAGE_D
	bool "Package D"
	select BR2_PACKAGE_B

config BR2_PACKAGE_E
	bool "Package E"
	select BR2_PACKAGE_D
Package C
will be visible if Package B
has been
selected, which in turn is only visible if Package A
has been
selected.
Package E will select Package D, which will select Package B; it will not check for the dependencies of Package B, so it will not select Package A. Since Package B is then selected but Package A is not, this violates the dependency of Package B on Package A. Therefore, in such a situation, the transitive dependency has to be added explicitly:
config BR2_PACKAGE_D
	bool "Package D"
	depends on BR2_PACKAGE_A
	select BR2_PACKAGE_B

config BR2_PACKAGE_E
	bool "Package E"
	depends on BR2_PACKAGE_A
	select BR2_PACKAGE_D
Overall, for package library dependencies, select
should be
preferred.
Note that such dependencies will ensure that the dependency option
is also enabled, but not necessarily built before your package. To do
so, the dependency also needs to be expressed in the .mk
file of the
package.
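For instance, if libfoo's Config.in selects BR2_PACKAGE_ZLIB, a sketch of the matching build-ordering dependency in the package's .mk file could be (the variable contents are illustrative):

# Ensure zlib is built and installed before libfoo is configured
LIBFOO_DEPENDENCIES += zlib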
Further formatting details: see the coding style.
Many packages depend on certain options of the toolchain: the choice of C library, C++ support, thread support, RPC support, wchar support, or dynamic library support. Some packages can only be built on certain target architectures, or if an MMU is available in the processor.
These dependencies have to be expressed with the appropriate depends
on statements in the Config.in file. Additionally, for dependencies on
toolchain options, a comment
should be displayed when the option is
not enabled, so that the user knows why the package is not available.
Dependencies on target architecture or MMU support should not be
made visible in a comment: since it is unlikely that the user can
freely choose another target, it makes little sense to show these
dependencies explicitly.
The comment
should only be visible if the config
option itself would
be visible when the toolchain option dependencies are met. This means
that all other dependencies of the package (including dependencies on
target architecture and MMU support) have to be repeated on the
comment
definition. To keep it clear, the depends on
statement for
these non-toolchain options should be kept separate from the depends on
statement for the toolchain options.
If there is a dependency on a config option in that same file (typically
the main package) it is preferable to have a global if … endif
construct rather than repeating the depends on
statement on the
comment and other config options.
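A sketch (with made-up symbols) combining these rules: the non-toolchain dependency is repeated on the comment on its own depends on line, only the toolchain dependency appears in the comment text, and sub-options live in a global if…endif block:

config BR2_PACKAGE_FOO
	bool "foo"
	depends on BR2_USE_MMU
	depends on BR2_TOOLCHAIN_HAS_THREADS
	help
	  Example package.

comment "foo needs a toolchain w/ threads"
	depends on BR2_USE_MMU
	depends on !BR2_TOOLCHAIN_HAS_THREADS

if BR2_PACKAGE_FOO

config BR2_PACKAGE_FOO_EXTRAS
	bool "foo extras"

endif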
The general format of a dependency comment
for package foo is:
foo needs a toolchain w/ featA, featB, featC
for example:
mpd needs a toolchain w/ C++, threads, wchar
or
crda needs a toolchain w/ threads
Note that this text is kept brief on purpose, so that it will fit on a 80-character terminal.
The rest of this section enumerates the different target and toolchain options, the corresponding config symbols to depend on, and the text to use in the comment.
- Target architecture
  Dependency symbol: BR2_powerpc, BR2_mips, … (see arch/Config.in)
  Comment string: no comment to be added
- MMU support
  Dependency symbol: BR2_USE_MMU
  Comment string: no comment to be added
- Gcc __sync* built-ins used for atomic operations. They are available in variants operating on 1 byte, 2 bytes, 4 bytes and 8 bytes. Since different architectures support atomic operations on different sizes, one dependency symbol is available for each size:
  Dependency symbols: BR2_TOOLCHAIN_HAS_SYNC_1 for 1 byte, BR2_TOOLCHAIN_HAS_SYNC_2 for 2 bytes, BR2_TOOLCHAIN_HAS_SYNC_4 for 4 bytes, BR2_TOOLCHAIN_HAS_SYNC_8 for 8 bytes
- Gcc __atomic* built-ins used for atomic operations.
  Dependency symbol: BR2_TOOLCHAIN_HAS_ATOMIC
- Kernel headers
  Dependency symbol: BR2_TOOLCHAIN_HEADERS_AT_LEAST_X_Y (replace X_Y with the proper version, see toolchain/Config.in)
  Comment string: headers >= X.Y and/or headers <= X.Y (replace X.Y with the proper version)
- GCC version
  Dependency symbol: BR2_TOOLCHAIN_GCC_AT_LEAST_X_Y (replace X_Y with the proper version, see toolchain/Config.in)
  Comment string: gcc >= X.Y and/or gcc <= X.Y (replace X.Y with the proper version)
- Host GCC version
  Dependency symbol: BR2_HOST_GCC_AT_LEAST_X_Y (replace X_Y with the proper version, see Config.in)
- C library
  Dependency symbols: BR2_TOOLCHAIN_USES_GLIBC, BR2_TOOLCHAIN_USES_MUSL, BR2_TOOLCHAIN_USES_UCLIBC
  Comment string: foo needs a glibc toolchain, or foo needs a glibc toolchain w/ C++
- C++ support
  Dependency symbol: BR2_INSTALL_LIBSTDCPP
  Comment string: C++
- D support
  Dependency symbol: BR2_TOOLCHAIN_HAS_DLANG
  Comment string: Dlang
- Fortran support
  Dependency symbol: BR2_TOOLCHAIN_HAS_FORTRAN
  Comment string: fortran
- thread support
  Dependency symbol: BR2_TOOLCHAIN_HAS_THREADS
  Comment string: threads (unless BR2_TOOLCHAIN_HAS_THREADS_NPTL is also needed, in which case, specifying only NPTL is sufficient)
- NPTL thread support
  Dependency symbol: BR2_TOOLCHAIN_HAS_THREADS_NPTL
  Comment string: NPTL
- RPC support
  Dependency symbol: BR2_TOOLCHAIN_HAS_NATIVE_RPC
  Comment string: RPC
- wchar support
  Dependency symbol: BR2_USE_WCHAR
  Comment string: wchar
- dynamic library
  Dependency symbol: !BR2_STATIC_LIBS
  Comment string: dynamic library
Some packages need a Linux kernel to be built by buildroot. These are typically kernel modules or firmware. A comment should be added in the Config.in file to express this dependency, similar to dependencies on toolchain options. The general format is:
foo needs a Linux kernel to be built
If there is a dependency on both toolchain options and the Linux kernel, use this format:
foo needs a toolchain w/ featA, featB, featC and a Linux kernel to be built
If a package needs udev /dev management, it should depend on symbol
BR2_PACKAGE_HAS_UDEV
, and the following comment should be added:
foo needs udev /dev management
If there is a dependency on both toolchain options and udev /dev management, use this format:
foo needs udev /dev management and a toolchain w/ featA, featB, featC
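A sketch of how such a dependency could be expressed for a hypothetical foo package:

config BR2_PACKAGE_FOO
	bool "foo"
	depends on BR2_PACKAGE_HAS_UDEV
	help
	  Example package requiring udev /dev management.

comment "foo needs udev /dev management"
	depends on !BR2_PACKAGE_HAS_UDEV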
Some features can be provided by more than one package, such as the openGL libraries.
See Section 18.12, “Infrastructure for virtual packages” for more on the virtual packages.
Finally, here’s the hardest part. Create a file named libfoo.mk
. It
describes how the package should be downloaded, configured, built,
installed, etc.
Depending on the package type, the .mk file must be written in a different way, using different infrastructures: there is a generic infrastructure, as well as specific infrastructures for autotools-based, CMake-based, Python (using the flit, pep517, setuptools, setuptools-rust or maturin mechanisms) and other kinds of packages. We cover them through a tutorial and a reference.
Further formatting details: see the writing rules.
When possible, you must add a third file, named libfoo.hash
, that
contains the hashes of the downloaded files for the libfoo
package. The only reason for not adding a .hash
file is when hash
checking is not possible due to how the package is downloaded.
When a package has a version selection choice, then the hash file may be
stored in a subdirectory named after the version, e.g.
package/libfoo/1.2.3/libfoo.hash
. This is especially important if the
different versions have different licensing terms, but they are stored
in the same file. Otherwise, the hash file should stay in the package’s
directory.
The hashes stored in that file are used to validate the integrity of the downloaded files and of the license files.
The format of this file is one line for each file for which to check the hash, each line with the following three fields separated by two spaces:
- the type of hash, one of: md5, sha1, sha224, sha256, sha384, sha512
- the hash of the file:
  - md5, 32 hexadecimal characters
  - sha1, 40 hexadecimal characters
  - sha224, 56 hexadecimal characters
  - sha256, 64 hexadecimal characters
  - sha384, 96 hexadecimal characters
  - sha512, 128 hexadecimal characters
- the name of the file: without any directory component for downloaded files, or the path as it appears in FOO_LICENSE_FILES for license files.
Lines starting with a #
sign are considered comments, and ignored. Empty
lines are ignored.
There can be more than one hash for a single file, each on its own line. In this case, all hashes must match.
Note. Ideally, the hashes stored in this file should match the hashes published by
upstream, e.g. on their website, in the e-mail announcement… If upstream
provides more than one type of hash (e.g. sha1
and sha512
), then it is
best to add all those hashes in the .hash
file. If upstream does not
provide any hash, or only provides an md5
hash, then compute at least one
strong hash yourself (preferably sha256
, but not md5
), and mention
this in a comment line above the hashes.
Note. The hashes for license files are used to detect a license change when a
package version is bumped. The hashes are checked during the make legal-info
target run. For a package with multiple versions (like Qt5),
create the hash file in a subdirectory <packageversion>
of that package
(see also Section 19.2, “How patches are applied”).
The example below defines a sha1 and a sha256 published by upstream for the main libfoo-1.2.3.tar.bz2 tarball, an md5 from upstream and a locally-computed sha256 hash for a binary blob, a sha256 for a downloaded patch, and an archive with no hash:
# Hashes from: http://www.foosoftware.org/download/libfoo-1.2.3.tar.bz2.{sha1,sha256}:
sha1  486fb55c3efa71148fe07895fd713ea3a5ae343a  libfoo-1.2.3.tar.bz2
sha256  efc8103cc3bcb06bda6a781532d12701eb081ad83e8f90004b39ab81b65d4369  libfoo-1.2.3.tar.bz2

# md5 from: http://www.foosoftware.org/download/libfoo-1.2.3.tar.bz2.md5, sha256 locally computed:
md5  2d608f3c318c6b7557d551a5a09314f03452f1a1  libfoo-data.bin
sha256  01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b  libfoo-data.bin

# Locally computed:
sha256  ff52101fb90bbfc3fe9475e425688c660f46216d7e751c4bbdb1dc85cdccacb9  libfoo-fix-blabla.patch

# Hash for license files:
sha256  a45a845012742796534f7e91fe623262ccfb99460a2bd04015bd28d66fba95b8  COPYING
sha256  01b1f9f2c8ee648a7a596a1abe8aa4ed7899b1c9e5551bda06da6e422b04aa55  doc/COPYING.LGPL
If the .hash
file is present, and it contains one or more hashes for a
downloaded file, the hash(es) computed by Buildroot (after download) must
match the hash(es) stored in the .hash
file. If one or more hashes do
not match, Buildroot considers this an error, deletes the downloaded file,
and aborts.
If the .hash
file is present, but it does not contain a hash for a
downloaded file, Buildroot considers this an error and aborts. However,
the downloaded file is left in the download directory since this
typically indicates that the .hash
file is wrong but the downloaded
file is probably OK.
Hashes are currently checked for files fetched from http/ftp servers, Git or subversion repositories, files copied using scp and local files. Hashes are not checked for other version control systems (such as CVS, mercurial) because Buildroot currently does not generate reproducible tarballs when source code is fetched from such version control systems.
Additionally, for packages for which it is possible to specify a custom version (e.g. a custom version string, a remote tarball URL, or a VCS repository location and changeset), Buildroot can’t carry hashes for those. It is however possible to provide a list of extra hashes that can cover such cases.
Hashes should only be added in .hash
files for files that are
guaranteed to be stable. For example, patches auto-generated by Github
are not guaranteed to be stable, and therefore their hashes can change
over time. Such patches should not be downloaded, and instead be added
locally to the package folder.
If the .hash
file is missing, then no check is done at all.
Packages that provide a system daemon usually need to be started somehow at boot. Buildroot comes with support for several init systems, some are considered tier one (see Section 6.3, “init system”), while others are also available but do not have the same level of integration. Ideally, all packages providing a system daemon should provide a start script for BusyBox/SysV init and a systemd unit file.
For consistency, the start script must follow the style and composition
as shown in the reference: package/busybox/S01syslogd
. An annotated
example of this style is shown below. There is no specific coding style
for systemd unit files, but if a package comes with its own unit file,
that is preferred over a buildroot specific one, if it is compatible
with buildroot.
The name of the start script is composed of the SNN
and the daemon
name. The NN
is the start order number which needs to be carefully
chosen. For example, a program that requires networking to be up should
not start before S40network. The scripts are started in alphabetical order, so S01syslogd starts before S01watchdogd, and S02sysctl starts thereafter.
#!/bin/sh

DAEMON="syslogd"
PIDFILE="/var/run/$DAEMON.pid"

SYSLOGD_ARGS=""

# shellcheck source=/dev/null
[ -r "/etc/default/$DAEMON" ] && . "/etc/default/$DAEMON"

# BusyBox' syslogd does not create a pidfile, so pass "-n" in the command line
# and use "--make-pidfile" to instruct start-stop-daemon to create one.
start() {
	printf 'Starting %s: ' "$DAEMON"
	# shellcheck disable=SC2086 # we need the word splitting
	start-stop-daemon --start --background --make-pidfile \
		--pidfile "$PIDFILE" --exec "/sbin/$DAEMON" \
		-- -n $SYSLOGD_ARGS
	status=$?
	if [ "$status" -eq 0 ]; then
		echo "OK"
	else
		echo "FAIL"
	fi
	return "$status"
}

stop() {
	printf 'Stopping %s: ' "$DAEMON"
	start-stop-daemon --stop --pidfile "$PIDFILE" --exec "/sbin/$DAEMON"
	status=$?
	if [ "$status" -eq 0 ]; then
		echo "OK"
	else
		echo "FAIL"
		return "$status"
	fi
	while start-stop-daemon --stop --test --quiet --pidfile "$PIDFILE" \
		--exec "/sbin/$DAEMON"; do
		sleep 0.1
	done
	rm -f "$PIDFILE"
	return "$status"
}

restart() {
	stop
	start
}

case "$1" in
	start|stop|restart)
		"$1";;
	reload)
		# Restart, since there is no true "reload" feature.
		restart;;
	*)
		echo "Usage: $0 {start|stop|restart|reload}"
		exit 1
esac
Note: programs that support reloading their configuration in some fashion (SIGHUP) should provide a reload() function similar to stop(). The start-stop-daemon command supports --stop --signal HUP for this. It is recommended to always append --exec "/sbin/$DAEMON" to all start-stop-daemon commands to ensure signals are sent to a PID that matches $DAEMON.
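A sketch of such a reload() function, following the conventions of the stop() function above and assuming the daemon reloads its configuration on SIGHUP:

reload() {
	printf 'Reloading %s config: ' "$DAEMON"
	# Ask the running daemon to reload its configuration via SIGHUP
	start-stop-daemon --stop --signal HUP --quiet --pidfile "$PIDFILE" \
		--exec "/sbin/$DAEMON"
	status=$?
	if [ "$status" -eq 0 ]; then
		echo "OK"
	else
		echo "FAIL"
	fi
	return "$status"
}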
Both start scripts and unit files can source command line arguments from /etc/default/foo. In general, if such a file does not exist, it should not block the start of the daemon, unless there is some site-specific command line argument the daemon requires to start. For start scripts, a FOO_ARGS="-s -o -m -e -args" variable can be defined to a default value in the script, and the user can override this from /etc/default/foo.
By packages with specific build systems we mean all the packages whose build system is not one of the standard ones, such as autotools or CMake. This typically includes packages whose build system is based on hand-written Makefiles or shell scripts.
01: ################################################################################
02: #
03: # libfoo
04: #
05: ################################################################################
06:
07: LIBFOO_VERSION = 1.0
08: LIBFOO_SOURCE = libfoo-$(LIBFOO_VERSION).tar.gz
09: LIBFOO_SITE = http://www.foosoftware.org/download
10: LIBFOO_LICENSE = GPL-3.0+
11: LIBFOO_LICENSE_FILES = COPYING
12: LIBFOO_INSTALL_STAGING = YES
13: LIBFOO_CONFIG_SCRIPTS = libfoo-config
14: LIBFOO_DEPENDENCIES = host-libaaa libbbb
15:
16: define LIBFOO_BUILD_CMDS
17: 	$(MAKE) $(TARGET_CONFIGURE_OPTS) -C $(@D) all
18: endef
19:
20: define LIBFOO_INSTALL_STAGING_CMDS
21: 	$(INSTALL) -D -m 0755 $(@D)/libfoo.a $(STAGING_DIR)/usr/lib/libfoo.a
22: 	$(INSTALL) -D -m 0644 $(@D)/foo.h $(STAGING_DIR)/usr/include/foo.h
23: 	$(INSTALL) -D -m 0755 $(@D)/libfoo.so* $(STAGING_DIR)/usr/lib
24: endef
25:
26: define LIBFOO_INSTALL_TARGET_CMDS
27: 	$(INSTALL) -D -m 0755 $(@D)/libfoo.so* $(TARGET_DIR)/usr/lib
28: 	$(INSTALL) -d -m 0755 $(TARGET_DIR)/etc/foo.d
29: endef
30:
31: define LIBFOO_USERS
32: 	foo -1 libfoo -1 * - - - LibFoo daemon
33: endef
34:
35: define LIBFOO_DEVICES
36: 	/dev/foo c 666 0 0 42 0 - - -
37: endef
38:
39: define LIBFOO_PERMISSIONS
40: 	/bin/foo f 4755 foo libfoo - - - - -
41: endef
42:
43: $(eval $(generic-package))
The Makefile begins on lines 7 to 11 with metadata information: the version of the package (LIBFOO_VERSION), the name of the tarball containing the package (LIBFOO_SOURCE) (xz-ed tarball recommended), the Internet location from which the tarball can be downloaded (LIBFOO_SITE), the license (LIBFOO_LICENSE) and the file with the license text (LIBFOO_LICENSE_FILES). All variables must start with the same prefix, LIBFOO_ in this case. This prefix is always the uppercased version of the package name (see below to understand where the package name is defined).
On line 12, we specify that this package wants to install something to
the staging space. This is often needed for libraries, since they must
install header files and other development files in the staging space.
This will ensure that the commands listed in the
LIBFOO_INSTALL_STAGING_CMDS
variable will be executed.
On line 13, we specify that there is some fixing to be done to some of the libfoo-config files that were installed during the LIBFOO_INSTALL_STAGING_CMDS phase. These *-config files are executable shell scripts located in the $(STAGING_DIR)/usr/bin directory that are executed by other 3rd party packages to find out the location and the linking flags of this particular package.
The problem is that, by default, all these *-config files give wrong, host-system linking flags that are unsuitable for cross-compiling. For example:
-I/usr/include instead of -I$(STAGING_DIR)/usr/include
or:
-L/usr/lib instead of -L$(STAGING_DIR)/usr/lib
So some sed magic is done to these scripts to make them give correct
flags.
The argument to be given to LIBFOO_CONFIG_SCRIPTS
is the file name(s)
of the shell script(s) needing fixing. All these names are relative to
$(STAGING_DIR)/usr/bin and if needed multiple names can be given.
In addition, the scripts listed in LIBFOO_CONFIG_SCRIPTS
are removed
from $(TARGET_DIR)/usr/bin
, since they are not needed on the target.
Example 18.1. Config script: divine package
Package divine installs shell script $(STAGING_DIR)/usr/bin/divine-config.
So its fixup would be:
DIVINE_CONFIG_SCRIPTS = divine-config
Example 18.2. Config script: imagemagick package:
Package imagemagick installs the following scripts: $(STAGING_DIR)/usr/bin/{Magick,Magick++,MagickCore,MagickWand,Wand}-config
So its fixup would be:
IMAGEMAGICK_CONFIG_SCRIPTS = \
	Magick-config Magick++-config \
	MagickCore-config MagickWand-config Wand-config
On line 14, we specify the list of dependencies this package relies on. These dependencies are listed in terms of lower-case package names, which can be packages for the target (without the host- prefix) or packages for the host (with the host- prefix). Buildroot will ensure that all these packages are built and installed before the current package starts its configuration.
The rest of the Makefile, lines 16..29, defines what should be done
at the different steps of the package configuration, compilation and
installation.
LIBFOO_BUILD_CMDS
tells what steps should be performed to
build the package. LIBFOO_INSTALL_STAGING_CMDS
tells what
steps should be performed to install the package in the staging space.
LIBFOO_INSTALL_TARGET_CMDS
tells what steps should be
performed to install the package in the target space.
All these steps rely on the $(@D)
variable, which
contains the directory where the source code of the package has been
extracted.
On lines 31..33, we define a user that is used by this package (e.g.
to run a daemon as non-root) (LIBFOO_USERS
).
On lines 35..37, we define a device-node file used by this package
(LIBFOO_DEVICES
).
On lines 39..41, we define the permissions to set to specific files
installed by this package (LIBFOO_PERMISSIONS
).
Finally, on line 43, we call the generic-package
function, which
generates, according to the variables defined previously, all the
Makefile code necessary to make your package work.
There are two variants of the generic target. The generic-package
macro is
used for packages to be cross-compiled for the target. The
host-generic-package
macro is used for host packages, natively compiled
for the host. It is possible to call both of them in a single .mk
file: once to create the rules to generate a target
package and once to create the rules to generate a host package:
$(eval $(generic-package))
$(eval $(host-generic-package))
This might be useful if the compilation of the target package requires
some tools to be installed on the host. If the package name is
libfoo
, then the name of the package for the target is also
libfoo
, while the name of the package for the host is
host-libfoo
. These names should be used in the DEPENDENCIES
variables of other packages, if they depend on libfoo
or
host-libfoo
.
The call to the generic-package
and/or host-generic-package
macro
must be at the end of the .mk
file, after all variable definitions.
The call to host-generic-package
must be after the call to
generic-package
, if any.
For the target package, the generic-package
uses the variables defined by
the .mk file and prefixed by the uppercased package name:
LIBFOO_*
. host-generic-package
uses the HOST_LIBFOO_*
variables. For
some variables, if the HOST_LIBFOO_
prefixed variable doesn’t
exist, the package infrastructure uses the corresponding variable
prefixed by LIBFOO_
. This is done for variables that are likely to
have the same value for both the target and host packages. See below
for details.
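A short sketch of a hypothetical libfoo.mk providing both a target and a host package; HOST_LIBFOO_VERSION, HOST_LIBFOO_SOURCE and HOST_LIBFOO_SITE are not defined, so they fall back to the LIBFOO_ values:

LIBFOO_VERSION = 1.0
LIBFOO_SOURCE = libfoo-$(LIBFOO_VERSION).tar.gz
LIBFOO_SITE = http://www.foosoftware.org/download
# Extra dependency only needed when building for the host
HOST_LIBFOO_DEPENDENCIES = host-pkgconf

$(eval $(generic-package))
$(eval $(host-generic-package))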
The list of variables that can be set in a .mk
file to give metadata
information is (assuming the package name is libfoo
) :
LIBFOO_VERSION
, mandatory, must contain the version of the
package. Note that if HOST_LIBFOO_VERSION
doesn’t exist, it is
assumed to be the same as LIBFOO_VERSION
. It can also be a
revision number or a tag for packages that are fetched directly
from their version control system. Examples:
LIBFOO_VERSION = 0.1.2
LIBFOO_VERSION = cb9d6aa9429e838f0e54faa3d455bcbab5eef057
LIBFOO_VERSION = v0.1.2 (a tag for a git tree)
Note: Using a branch name as FOO_VERSION is not supported, because it does not and cannot work as people would expect it to.
LIBFOO_SOURCE
may contain the name of the tarball of the package,
which Buildroot will use to download the tarball from
LIBFOO_SITE
. If HOST_LIBFOO_SOURCE
is not specified, it defaults
to LIBFOO_SOURCE
. If none are specified, then the value is assumed
to be libfoo-$(LIBFOO_VERSION).tar.gz
.
Example: LIBFOO_SOURCE = foobar-$(LIBFOO_VERSION).tar.bz2
LIBFOO_PATCH
may contain a space-separated list of patch file
names, that Buildroot will download and apply to the package source
code. If an entry contains ://
, then Buildroot will assume it is a
full URL and download the patch from this location. Otherwise,
Buildroot will assume that the patch should be downloaded from
LIBFOO_SITE
. If HOST_LIBFOO_PATCH
is not specified, it defaults
to LIBFOO_PATCH
. Note that patches that are included in Buildroot
itself use a different mechanism: all files of the form
*.patch
present in the package directory inside
Buildroot will be applied to the package after extraction (see
patching a package). Finally, patches listed in
the LIBFOO_PATCH
variable are applied before the patches stored
in the Buildroot package directory.
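For example, a sketch with one patch hosted on LIBFOO_SITE and one fetched from a full URL (both file names and the URL are made up):

LIBFOO_PATCH = \
	libfoo-fix-build.patch \
	https://patches.example.com/libfoo/0002-fix-cross-compile.patch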
LIBFOO_SITE
provides the location of the package, which can be a
URL or a local filesystem path. HTTP, FTP and SCP are supported URL
types for retrieving package tarballs. In these cases don’t include a
trailing slash: it will be added by Buildroot between the directory
and the filename as appropriate. Git, Subversion, Mercurial,
and Bazaar are supported URL types for retrieving packages directly
from source code management systems. There is a helper function to make
it easier to download source tarballs from GitHub (refer to
Section 18.25.4, “How to add a package from GitHub” for details). A filesystem path may be used
to specify either a tarball or a directory containing the package
source code. See LIBFOO_SITE_METHOD
below for more details on how
retrieval works.
Note that SCP URLs should be of the form
scp://[user@]host:filepath
, and that filepath is relative to the
user’s home directory, so you may want to prepend the path with a
slash for absolute paths:
scp://[user@]host:/absolutepath
. The same goes for SFTP URLs.
If HOST_LIBFOO_SITE
is not specified, it defaults to
LIBFOO_SITE
.
Examples:
LIBFOO_SITE=http://www.libfoosoftware.org/libfoo
LIBFOO_SITE=http://svn.xiph.org/trunk/Tremor
LIBFOO_SITE=/opt/software/libfoo.tar.gz
LIBFOO_SITE=$(TOPDIR)/../src/libfoo
LIBFOO_DL_OPTS
is a space-separated list of additional options to
pass to the downloader. Useful for retrieving documents with
server-side checking for user logins and passwords, or to use a proxy.
All download methods valid for LIBFOO_SITE_METHOD
are supported;
valid options depend on the download method (consult the man page
for the respective download utilities).
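For instance, a sketch passing extra options to the default wget download method (the credentials are purely illustrative):

# Options passed verbatim to wget when downloading the tarball
LIBFOO_DL_OPTS = --user=foo --password=bar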
LIBFOO_EXTRA_DOWNLOADS
is a space-separated list of additional
files that Buildroot should download. If an entry contains ://
then Buildroot will assume it is a complete URL and will download
the file using this URL. Otherwise, Buildroot will assume the file
to be downloaded is located at LIBFOO_SITE
. Buildroot will not do
anything with those additional files, except download them: it will
be up to the package recipe to use them from $(LIBFOO_DL_DIR)
.
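A sketch with a hypothetical extra data file, later installed from the download directory by the package recipe:

LIBFOO_EXTRA_DOWNLOADS = libfoo-data-$(LIBFOO_VERSION).bin

define LIBFOO_INSTALL_TARGET_CMDS
	$(INSTALL) -D -m 0644 $(LIBFOO_DL_DIR)/libfoo-data-$(LIBFOO_VERSION).bin \
		$(TARGET_DIR)/usr/share/libfoo/data.bin
endef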
LIBFOO_SITE_METHOD
determines the method used to fetch or copy the
package source code. In many cases, Buildroot guesses the method
from the contents of LIBFOO_SITE
and setting LIBFOO_SITE_METHOD
is unnecessary. When HOST_LIBFOO_SITE_METHOD
is not specified, it
defaults to the value of LIBFOO_SITE_METHOD
.
The possible values of LIBFOO_SITE_METHOD
are:
wget
for normal FTP/HTTP downloads of tarballs. Used by
default when LIBFOO_SITE
begins with http://
, https://
or
ftp://
.
scp
for downloads of tarballs over SSH with scp. Used by
default when LIBFOO_SITE
begins with scp://
.
sftp
for downloads of tarballs over SSH with sftp. Used by
default when LIBFOO_SITE
begins with sftp://
.
svn
for retrieving source code from a Subversion repository.
Used by default when LIBFOO_SITE
begins with svn://
. When a
http://
Subversion repository URL is specified in
LIBFOO_SITE
, one must specify LIBFOO_SITE_METHOD=svn
.
Buildroot performs a checkout which is preserved as a tarball in
the download cache; subsequent builds use the tarball instead of
performing another checkout.
cvs
for retrieving source code from a CVS repository.
Used by default when LIBFOO_SITE
begins with cvs://
.
The downloaded source code is cached as with the svn
method.
Anonymous pserver mode is assumed unless explicitly defined in LIBFOO_SITE. Both
LIBFOO_SITE=cvs://libfoo.net:/cvsroot/libfoo
and
LIBFOO_SITE=cvs://:ext:libfoo.net:/cvsroot/libfoo
are accepted; for the former, anonymous pserver access mode is assumed.
LIBFOO_SITE
must contain the source URL as well as the remote
repository directory. The module is the package name.
LIBFOO_VERSION
is mandatory and must be a tag, a branch, or
a date (e.g. "2014-10-20", "2014-10-20 13:45", "2014-10-20
13:45+01" see "man cvs" for further details).
git
for retrieving source code from a Git repository. Used by
default when LIBFOO_SITE
begins with git://
. The downloaded
source code is cached as with the svn
method.
hg
for retrieving source code from a Mercurial repository. One
must specify LIBFOO_SITE_METHOD=hg
when LIBFOO_SITE
contains a Mercurial repository URL. The downloaded source code
is cached as with the svn
method.
bzr
for retrieving source code from a Bazaar repository. Used
by default when LIBFOO_SITE
begins with bzr://
. The
downloaded source code is cached as with the svn
method.
file
for a local tarball. One should use this when
LIBFOO_SITE
specifies a package tarball as a local filename.
Useful for software that isn’t available publicly or in version
control.
local
for a local source code directory. One should use this
when LIBFOO_SITE
specifies a local directory path containing
the package source code. Buildroot copies the contents of the
source directory into the package’s build directory. Note that
for local
packages, no patches are applied. If you need to
still patch the source code, use LIBFOO_POST_RSYNC_HOOKS
, see
Section 18.23.1, “Using the POST_RSYNC
hook”.
LIBFOO_GIT_SUBMODULES
can be set to YES
to create an archive
with the git submodules in the repository. This is only available
for packages downloaded with git (i.e. when
LIBFOO_SITE_METHOD=git
). Note that we try not to use such git
submodules when they contain bundled libraries, in which case we
prefer to use those libraries from their own package.
LIBFOO_GIT_LFS
should be set to YES
if the Git repository uses
Git LFS to store large files out of band. This is only available for
packages downloaded with git (i.e. when LIBFOO_SITE_METHOD=git
).
LIBFOO_SVN_EXTERNALS
can be set to YES
to create an archive with
the svn external references. This is only available for packages
downloaded with subversion.
LIBFOO_STRIP_COMPONENTS
is the number of leading components
(directories) that tar must strip from file names on extraction.
The tarball for most packages has one leading component named
"<pkg-name>-<pkg-version>", thus Buildroot passes
--strip-components=1 to tar to remove it.
For non-standard packages that don’t have this component, or
that have more than one leading component to strip, set this
variable with the value to be passed to tar. Default: 1.
LIBFOO_EXCLUDES
is a space-separated list of patterns to exclude
when extracting the archive. Each item from that list is passed as
a tar’s --exclude
option. By default, empty.
LIBFOO_DEPENDENCIES
lists the dependencies (in terms of package
name) that are required for the current target package to
compile. These dependencies are guaranteed to be compiled and
installed before the configuration of the current package starts.
However, modifications to configuration of these dependencies will
not force a rebuild of the current package. In a similar way,
HOST_LIBFOO_DEPENDENCIES
lists the dependencies for the current
host package.
LIBFOO_EXTRACT_DEPENDENCIES
lists the dependencies (in terms of
package name) that are required for the current target package to be
extracted. These dependencies are guaranteed to be compiled and
installed before the extract step of the current package
starts. This is only used internally by the package infrastructure,
and should typically not be used directly by packages.
LIBFOO_PATCH_DEPENDENCIES
lists the dependencies (in terms of
package name) that are required for the current package to be
patched. These dependencies are guaranteed to be extracted and
patched (but not necessarily built) before the current package is
patched. In a similar way, HOST_LIBFOO_PATCH_DEPENDENCIES
lists
the dependencies for the current host package.
This is seldom used; usually, LIBFOO_DEPENDENCIES
is what you
really want to use.
LIBFOO_PROVIDES
lists all the virtual packages libfoo
is an
implementation of. See Section 18.12, “Infrastructure for virtual packages”.
LIBFOO_INSTALL_STAGING
can be set to YES
or NO
(default). If
set to YES
, then the commands in the LIBFOO_INSTALL_STAGING_CMDS
variables are executed to install the package into the staging
directory.
LIBFOO_INSTALL_TARGET
can be set to YES
(default) or NO
. If
set to YES
, then the commands in the LIBFOO_INSTALL_TARGET_CMDS
variables are executed to install the package into the target
directory.
LIBFOO_INSTALL_IMAGES
can be set to YES
or NO
(default). If
set to YES
, then the commands in the LIBFOO_INSTALL_IMAGES_CMDS
variable are executed to install the package into the images
directory.
LIBFOO_CONFIG_SCRIPTS
lists the names of the files in
$(STAGING_DIR)/usr/bin that need some special fixing to make them
cross-compiling friendly. Multiple file names separated by space can
be given and all are relative to $(STAGING_DIR)/usr/bin. The files
listed in LIBFOO_CONFIG_SCRIPTS
are also removed from
$(TARGET_DIR)/usr/bin
since they are not needed on the target.
LIBFOO_DEVICES
lists the device files to be created by Buildroot
when using the static device table. The syntax to use is the
makedevs one. You can find some documentation for this syntax in the
Chapter 25, Makedev syntax documentation. This variable is optional.
LIBFOO_PERMISSIONS
lists the changes of permissions to be done at
the end of the build process. The syntax is once again the makedevs one.
You can find some documentation for this syntax in the Chapter 25, Makedev syntax documentation.
This variable is optional.
LIBFOO_USERS
lists the users to create for this package, if it installs
a program you want to run as a specific user (e.g. as a daemon, or as a
cron-job). The syntax is similar in spirit to the makedevs one, and is
described in the Chapter 26, Makeusers syntax documentation. This variable is optional.
LIBFOO_LICENSE
defines the license (or licenses) under which the package
is released.
This name will appear in the manifest file produced by make legal-info
.
If the license appears in the SPDX License List,
use the SPDX short identifier to make the manifest file uniform.
Otherwise, describe the license in a precise and concise way, avoiding
ambiguous names such as BSD
which actually name a family of licenses.
This variable is optional. If it is not defined, unknown
will appear in
the license
field of the manifest file for this package.
The expected format for this variable must comply with the following rules:
- Use a comma to separate licenses (e.g. LIBFOO_LICENSE = GPL-2.0+, LGPL-2.1+). If there is a clear distinction between which component is licensed under what license, then annotate the license with that component, between parentheses (e.g. LIBFOO_LICENSE = GPL-2.0+ (programs), LGPL-2.1+ (libraries)).
- A license can also be appended to the variable, for example conditionally (e.g. FOO_LICENSE += , GPL-2.0+ (programs)); the infrastructure will internally remove the space before the comma.
- Use the or keyword when the user can choose between licenses (e.g. LIBFOO_LICENSE = AFL-2.1 or GPL-2.0+).
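For instance, a sketch appending a component-specific license only when a hypothetical sub-option is enabled (BR2_PACKAGE_LIBFOO_TOOLS is made up for the example):

LIBFOO_LICENSE = LGPL-2.1+ (library)
ifeq ($(BR2_PACKAGE_LIBFOO_TOOLS),y)
# The space before the comma is removed internally by the infrastructure
LIBFOO_LICENSE += , GPL-2.0+ (tools)
endif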
LIBFOO_LICENSE_FILES
is a space-separated list of files in the package
tarball that contain the license(s) under which the package is released.
make legal-info
copies all of these files in the legal-info
directory.
See Chapter 13, Legal notice and licensing for more information.
This variable is optional. If it is not defined, a warning will be produced
to let you know, and not saved
will appear in the license files
field
of the manifest file for this package.
LIBFOO_ACTUAL_SOURCE_TARBALL
only applies to packages whose
LIBFOO_SITE
/ LIBFOO_SOURCE
pair points to an archive that does
not actually contain source code, but binary code. This is a very
uncommon case, only known to apply to external toolchains which come
already compiled, although theoretically it might apply to other
packages. In such cases a separate tarball is usually available with
the actual source code. Set LIBFOO_ACTUAL_SOURCE_TARBALL
to the
name of the actual source code archive and Buildroot will download
it and use it when you run make legal-info
to collect
legally-relevant material. Note this file will not be downloaded
during regular builds nor by make source
.
LIBFOO_ACTUAL_SOURCE_SITE
provides the location of the actual
source tarball. The default value is LIBFOO_SITE
, so you don’t
need to set this variable if the binary and source archives are
hosted on the same directory. If LIBFOO_ACTUAL_SOURCE_TARBALL
is
not set, it doesn’t make sense to define
LIBFOO_ACTUAL_SOURCE_SITE
.
LIBFOO_REDISTRIBUTE
can be set to YES
(default) or NO
to indicate if
the package source code is allowed to be redistributed. Set it to NO
for
non-opensource packages: Buildroot will not save the source code for this
package when collecting the legal-info
.
LIBFOO_FLAT_STACKSIZE
defines the stack size of an application built into
the FLAT binary format. The application stack size on the NOMMU architecture
processors can’t be enlarged at run time. The default stack size for the
FLAT binary format is only 4k bytes. If the application consumes more stack,
append the required number here.
LIBFOO_BIN_ARCH_EXCLUDE
is a space-separated list of paths (relative
to the target directory) to ignore when checking that the package
installs correctly cross-compiled binaries. You seldom need to set this
variable, unless the package installs binary blobs outside the default
locations, /lib/firmware
, /usr/lib/firmware
, /lib/modules
,
/usr/lib/modules
, and /usr/share
, which are automatically excluded.
LIBFOO_IGNORE_CVES
is a space-separated list of CVEs that tells
Buildroot CVE tracking tools which CVEs should be ignored for this
package. This is typically used when the CVE is fixed by a patch in
the package, or when the CVE for some reason does not affect the
Buildroot package. A Makefile comment must always precede the
addition of a CVE to this variable. Example:
# 0001-fix-cve-2020-12345.patch
LIBFOO_IGNORE_CVES += CVE-2020-12345
# only when built with libbaz, which Buildroot doesn't support
LIBFOO_IGNORE_CVES += CVE-2020-54321
LIBFOO_CPE_ID_* is a set of variables that allows the package to define its CPE identifier. The available variables are:
LIBFOO_CPE_ID_VALID
, if set to YES
, specifies that the default
values for each of the following variables is appropriate, and
generates a valid CPE ID.
LIBFOO_CPE_ID_PREFIX
, specifies the prefix of the CPE identifier,
i.e. the first three fields. When not defined, the default value is
cpe:2.3:a
.
LIBFOO_CPE_ID_VENDOR
, specifies the vendor part of the CPE
identifier. When not defined, the default value is
<pkgname>_project
.
LIBFOO_CPE_ID_PRODUCT
, specifies the product part of the CPE
identifier. When not defined, the default value is <pkgname>
.
LIBFOO_CPE_ID_VERSION
, specifies the version part of the CPE
identifier. When not defined the default value is
$(LIBFOO_VERSION)
.
LIBFOO_CPE_ID_UPDATE
specifies the update part of the CPE
identifier. When not defined the default value is *
.
If any of those variables is defined, then the generic package
infrastructure assumes the package provides valid CPE information. In
this case, the generic package infrastructure will define
LIBFOO_CPE_ID
.
For a host package, if its LIBFOO_CPE_ID_*
variables are not
defined, it inherits the value of those variables from the
corresponding target package.
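As an illustration, a sketch for a hypothetical libfoo whose CPE vendor differs from the default <pkgname>_project (the vendor name is made up):

# The NVD records this package under the "foosoftware" vendor,
# not the default "libfoo_project"
LIBFOO_CPE_ID_VENDOR = foosoftware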
The recommended way to define these variables is to use the following syntax:
LIBFOO_VERSION = 2.32
Now, the variables that define what should be performed at the different steps of the build process.
LIBFOO_EXTRACT_CMDS
lists the actions to be performed to extract
the package. This is generally not needed as tarballs are
automatically handled by Buildroot. However, if the package uses a
non-standard archive format, such as a ZIP or RAR file, or has a
tarball with a non-standard organization, this variable allows you to override the package infrastructure default behavior.
LIBFOO_CONFIGURE_CMDS
lists the actions to be performed to
configure the package before its compilation.
LIBFOO_BUILD_CMDS
lists the actions to be performed to
compile the package.
HOST_LIBFOO_INSTALL_CMDS
lists the actions to be performed
to install the package, when the package is a host package. The
package must install its files to the directory given by
$(HOST_DIR)
. All files, including development files such as
headers should be installed, since other packages might be compiled
on top of this package.
LIBFOO_INSTALL_TARGET_CMDS
lists the actions to be
performed to install the package to the target directory, when the
package is a target package. The package must install its files to
the directory given by $(TARGET_DIR)
. Only the files required for
execution of the package have to be
installed. Header files, static libraries and documentation will be
removed again when the target filesystem is finalized.
LIBFOO_INSTALL_STAGING_CMDS
lists the actions to be
performed to install the package to the staging directory, when the
package is a target package. The package must install its files to
the directory given by $(STAGING_DIR)
. All development files
should be installed, since they might be needed to compile other
packages.
LIBFOO_INSTALL_IMAGES_CMDS
lists the actions to be performed to
install the package to the images directory, when the package is a
target package. The package must install its files to the directory
given by $(BINARIES_DIR)
. Only files that are binary images (aka
images) that do not belong in the TARGET_DIR
but are necessary
for booting the board should be placed here. For example, a package
should utilize this step if it has binaries which would be similar
to the kernel image, bootloader or root filesystem images.
LIBFOO_INSTALL_INIT_SYSV
, LIBFOO_INSTALL_INIT_OPENRC
and
LIBFOO_INSTALL_INIT_SYSTEMD
list the actions to install init
scripts either for the systemV-like init systems (busybox,
sysvinit, etc.), openrc or for the systemd units. These commands
will be run only when the relevant init system is installed (i.e.
if systemd is selected as the init system in the configuration,
only LIBFOO_INSTALL_INIT_SYSTEMD
will be run). The only exception
is when openrc is chosen as init system and LIBFOO_INSTALL_INIT_OPENRC
has not been set; in such a situation, LIBFOO_INSTALL_INIT_SYSV
will
be called, since openrc supports sysv init scripts.
When systemd is used as the init system, buildroot will automatically enable
all services using the systemctl preset-all
command in the final phase of
image building. You can add preset files to prevent a particular unit from
being automatically enabled by buildroot.
LIBFOO_HELP_CMDS
lists the actions to print the package help, which
is included in the main make help
output. These commands can print
anything in any format.
This is seldom used, as packages rarely have custom rules. Do not use
this variable, unless you really know that you need to print help.
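A minimal sketch of such a help text (the printed target name is purely illustrative):

define LIBFOO_HELP_CMDS
	@echo '  libfoo-dump-config     - print the libfoo configuration used for this build'
endef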
LIBFOO_LINUX_CONFIG_FIXUPS
lists the Linux kernel configuration
options that are needed to build and use this package, and without
which the package is fundamentally broken. This shall be a set of
calls to one of the kconfig tweaking options: KCONFIG_ENABLE_OPT
,
KCONFIG_DISABLE_OPT
, or KCONFIG_SET_OPT
.
This is seldom used, as packages usually have no strict requirements on
the kernel options.
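A sketch for a hypothetical package that fundamentally needs FUSE support in the kernel:

define LIBFOO_LINUX_CONFIG_FIXUPS
	$(call KCONFIG_ENABLE_OPT,CONFIG_FUSE_FS)
endef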
The preferred way to define these variables is:
define LIBFOO_CONFIGURE_CMDS
	action 1
	action 2
	action 3
endef
In the action definitions, you can use the following variables:
$(LIBFOO_PKGDIR)
contains the path to the directory containing the
libfoo.mk
and Config.in
files. This variable is useful when it is
necessary to install a file bundled in Buildroot, like a runtime
configuration file, a splashscreen image…
$(@D)
, which contains the directory in which the package source
code has been uncompressed.
$(LIBFOO_DL_DIR)
contains the path to the directory where all the downloads
made by Buildroot for libfoo
are stored in.
$(TARGET_CC)
, $(TARGET_LD)
, etc. to get the target
cross-compilation utilities
$(TARGET_CROSS)
to get the cross-compilation toolchain prefix
$(HOST_DIR)
, $(STAGING_DIR)
and $(TARGET_DIR)
variables to install the packages properly. Those variables point to
the global host, staging and target directories, unless
per-package directory support is used, in which case they point to
the current package host, staging and target directories. In
both cases, it doesn’t make any difference from the package point of
view: it should simply use HOST_DIR
, STAGING_DIR
and
TARGET_DIR
. See Section 8.12, “Top-level parallel build” for more details
about per-package directory support.
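For example, a sketch of an install step using $(@D), $(TARGET_DIR) and $(LIBFOO_PKGDIR) to install a configuration file bundled in the Buildroot package directory (the file names are made up):

define LIBFOO_INSTALL_TARGET_CMDS
	$(INSTALL) -D -m 0755 $(@D)/foo $(TARGET_DIR)/usr/bin/foo
	# foo.conf ships in package/libfoo/ inside the Buildroot tree
	$(INSTALL) -D -m 0644 $(LIBFOO_PKGDIR)/foo.conf $(TARGET_DIR)/etc/foo.conf
endef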
Finally, you can also use hooks. See Section 18.23, “Hooks available in the various build steps” for more information.
First, let’s see how to write a .mk
file for an autotools-based
package, with an example:
01: ################################################################################
02: #
03: # libfoo
04: #
05: ################################################################################
06:
07: LIBFOO_VERSION = 1.0
08: LIBFOO_SOURCE = libfoo-$(LIBFOO_VERSION).tar.gz
09: LIBFOO_SITE = http://www.foosoftware.org/download
10: LIBFOO_INSTALL_STAGING = YES
11: LIBFOO_INSTALL_TARGET = NO
12: LIBFOO_CONF_OPTS = --disable-shared
13: LIBFOO_DEPENDENCIES = libglib2 host-pkgconf
14:
15: $(eval $(autotools-package))
On line 7, we declare the version of the package.
On line 8 and 9, we declare the name of the tarball (xz-ed tarball recommended) and the location of the tarball on the Web. Buildroot will automatically download the tarball from this location.
On line 10, we tell Buildroot to install the package to the staging
directory. The staging directory, located in output/staging/
is the directory where all the packages are installed, including their
development files, etc. By default, packages are not installed to the
staging directory, since usually, only libraries need to be installed in
the staging directory: their development files are needed to compile
other libraries or applications depending on them. Also by default, when
staging installation is enabled, packages are installed in this location
using the make install
command.
On line 11, we tell Buildroot to not install the package to the
target directory. This directory contains what will become the root
filesystem running on the target. For purely static libraries, it is
not necessary to install them in the target directory because they will
not be used at runtime. By default, target installation is enabled; setting
this variable to NO is almost never needed. Also by default, packages are
installed in this location using the make install
command.
On line 12, we tell Buildroot to pass a custom configure option, that
will be passed to the ./configure
script before configuring
and building the package.
On line 13, we declare our dependencies, so that they are built before the build process of our package starts.
Finally, on line 15, we invoke the autotools-package macro that generates all the Makefile rules that actually allow the package to be built.
The main macro of the autotools package infrastructure is
autotools-package
. It is similar to the generic-package
macro. The ability to
have target and host packages is also available, with the
host-autotools-package
macro.
Just like the generic infrastructure, the autotools infrastructure
works by defining a number of variables before calling the
autotools-package
macro.
All the package metadata information variables that exist in the generic package infrastructure also exist in the autotools infrastructure.
A few additional variables, specific to the autotools infrastructure, can also be defined. Many of them are only useful in very specific cases, typical packages will therefore only use a few of them.
LIBFOO_SUBDIR
may contain the name of a subdirectory
inside the package that contains the configure script. This is useful,
if for example, the main configure script is not at the root of the
tree extracted by the tarball. If HOST_LIBFOO_SUBDIR
is
not specified, it defaults to LIBFOO_SUBDIR
.
LIBFOO_CONF_ENV
, to specify additional environment
variables to pass to the configure script. By default, empty.
LIBFOO_CONF_OPTS
, to specify additional configure
options to pass to the configure script. By default, empty.
LIBFOO_MAKE
, to specify an alternate make
command. This is typically useful when parallel make is enabled in
the configuration (using BR2_JLEVEL
) but that this
feature should be disabled for the given package, for one reason or
another. By default, set to $(MAKE)
. If parallel building
is not supported by the package, then it should be set to
LIBFOO_MAKE=$(MAKE1)
.
LIBFOO_MAKE_ENV
, to specify additional environment
variables to pass to make in the build step. These are passed before
the make
command. By default, empty.
LIBFOO_MAKE_OPTS
, to specify additional variables to
pass to make in the build step. These are passed after the
make
command. By default, empty.
LIBFOO_AUTORECONF
, tells whether the package should
be autoreconfigured or not (i.e. if the configure script and
Makefile.in files should be re-generated by re-running autoconf,
automake, libtool, etc.). Valid values are YES
and
NO
. By default, the value is NO
LIBFOO_AUTORECONF_ENV
, to specify additional environment
variables to pass to the autoreconf program if
LIBFOO_AUTORECONF=YES
. These are passed in the environment of
the autoreconf command. By default, empty.
LIBFOO_AUTORECONF_OPTS
to specify additional options
passed to the autoreconf program if
LIBFOO_AUTORECONF=YES
. By default, empty.
LIBFOO_AUTOPOINT
, tells whether the package should be
autopointed or not (i.e. if the package needs I18N infrastructure
copied in.) Only valid when LIBFOO_AUTORECONF=YES
. Valid
values are YES
and NO
. The default is NO
.
LIBFOO_LIBTOOL_PATCH
tells whether the Buildroot
patch to fix libtool cross-compilation issues should be applied or
not. Valid values are YES
and NO
. By
default, the value is YES
LIBFOO_INSTALL_STAGING_OPTS
contains the make options
used to install the package to the staging directory. By default, the
value is DESTDIR=$(STAGING_DIR) install
, which is
correct for most autotools packages. It is still possible to override
it.
LIBFOO_INSTALL_TARGET_OPTS
contains the make options
used to install the package to the target directory. By default, the
value is DESTDIR=$(TARGET_DIR) install
. The default
value is correct for most autotools packages, but it is still possible
to override it if needed.
With the autotools infrastructure, all the steps required to build and install the packages are already defined, and they generally work well for most autotools-based packages. However, when required, it is still possible to customize what is done in any particular step:

- By adding a post-operation hook (after extract, patch, configure, build or install). See Section 18.23, “Hooks available in the various build steps” for details.
- By overriding one of the steps. For example, even if the autotools infrastructure is used, if the package .mk file defines its own LIBFOO_CONFIGURE_CMDS variable, it will be used instead of the default autotools one. However, using this method should be restricted to very specific cases. Do not use it in the general case.
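For example, a sketch of a post-patch hook that removes a bundled third-party library (the directory name is made up), which is usually preferable to overriding a whole step:

define LIBFOO_REMOVE_BUNDLED_ZLIB
	rm -rf $(@D)/third_party/zlib
endef
LIBFOO_POST_PATCH_HOOKS += LIBFOO_REMOVE_BUNDLED_ZLIB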
First, let’s see how to write a .mk
file for a CMake-based package,
with an example:
01: ################################################################################
02: #
03: # libfoo
04: #
05: ################################################################################
06:
07: LIBFOO_VERSION = 1.0
08: LIBFOO_SOURCE = libfoo-$(LIBFOO_VERSION).tar.gz
09: LIBFOO_SITE = http://www.foosoftware.org/download
10: LIBFOO_INSTALL_STAGING = YES
11: LIBFOO_INSTALL_TARGET = NO
12: LIBFOO_CONF_OPTS = -DBUILD_DEMOS=ON
13: LIBFOO_DEPENDENCIES = libglib2 host-pkgconf
14:
15: $(eval $(cmake-package))
On line 7, we declare the version of the package.
On line 8 and 9, we declare the name of the tarball (xz-ed tarball recommended) and the location of the tarball on the Web. Buildroot will automatically download the tarball from this location.
On line 10, we tell Buildroot to install the package to the staging
directory. The staging directory, located in output/staging/
is the directory where all the packages are installed, including their
development files, etc. By default, packages are not installed to the
staging directory, since usually, only libraries need to be installed in
the staging directory: their development files are needed to compile
other libraries or applications depending on them. Also by default, when
staging installation is enabled, packages are installed in this location
using the make install
command.
On line 11, we tell Buildroot to not install the package to the
target directory. This directory contains what will become the root
filesystem running on the target. For purely static libraries, it is
not necessary to install them in the target directory because they will
not be used at runtime. By default, target installation is enabled; setting
this variable to NO is almost never needed. Also by default, packages are
installed in this location using the make install
command.
On line 12, we tell Buildroot to pass custom options to CMake when it is configuring the package.
On line 13, we declare our dependencies, so that they are built before the build process of our package starts.
Finally, on line 15, we invoke the cmake-package
macro that generates all the Makefile rules that actually allow the
package to be built.
The main macro of the CMake package infrastructure is
cmake-package
. It is similar to the generic-package
macro. The ability to
have target and host packages is also available, with the
host-cmake-package
macro.
Just like the generic infrastructure, the CMake infrastructure works
by defining a number of variables before calling the cmake-package
macro.
All the package metadata information variables that exist in the generic package infrastructure also exist in the CMake infrastructure.
A few additional variables, specific to the CMake infrastructure, can also be defined. Many of them are only useful in very specific cases, typical packages will therefore only use a few of them.
LIBFOO_SUBDIR
may contain the name of a subdirectory inside the
package that contains the main CMakeLists.txt file. This is useful,
if for example, the main CMakeLists.txt file is not at the root of
the tree extracted by the tarball. If HOST_LIBFOO_SUBDIR
is not
specified, it defaults to LIBFOO_SUBDIR
.
LIBFOO_CMAKE_BACKEND
specifies the cmake backend to use, one of
make
(to use the GNU Makefiles generator, the default) or ninja
(to use the Ninja generator).
LIBFOO_CONF_ENV
, to specify additional environment variables to
pass to CMake. By default, empty.
LIBFOO_CONF_OPTS
, to specify additional configure options to pass
to CMake. By default, empty. A number of common CMake options are
set by the cmake-package
infrastructure; so it is normally not
necessary to set them in the package’s *.mk
file unless you want
to override them:
CMAKE_BUILD_TYPE
is driven by BR2_ENABLE_RUNTIME_DEBUG
;
CMAKE_INSTALL_PREFIX
;
BUILD_SHARED_LIBS
is driven by BR2_STATIC_LIBS
;
BUILD_DOC
, BUILD_DOCS
are disabled;
BUILD_EXAMPLE
, BUILD_EXAMPLES
are disabled;
BUILD_TEST
, BUILD_TESTS
, BUILD_TESTING
are disabled.
LIBFOO_BUILD_ENV
and LIBFOO_BUILD_OPTS
to specify additional
environment variables, or command line options, to pass to the backend
at build time.
LIBFOO_SUPPORTS_IN_SOURCE_BUILD = NO
should be set when the package
cannot be built inside the source tree but needs a separate build
directory.
LIBFOO_MAKE
, to specify an alternate make
command. This is
typically useful when parallel make is enabled in the configuration
(using BR2_JLEVEL
) but that this feature should be disabled for
the given package, for one reason or another. By default, set to
$(MAKE)
. If parallel building is not supported by the package,
then it should be set to LIBFOO_MAKE=$(MAKE1)
.
LIBFOO_MAKE_ENV
, to specify additional environment variables to
pass to make in the build step. These are passed before the make
command. By default, empty.
LIBFOO_MAKE_OPTS
, to specify additional variables to pass to make
in the build step. These are passed after the make
command. By
default, empty.
LIBFOO_INSTALL_OPTS
contains the make options used to
install the package to the host directory. By default, the value
is install
, which is correct for most CMake packages. It is still
possible to override it.
LIBFOO_INSTALL_STAGING_OPTS
contains the make options used to
install the package to the staging directory. By default, the value
is DESTDIR=$(STAGING_DIR) install/fast
, which is correct for most
CMake packages. It is still possible to override it.
LIBFOO_INSTALL_TARGET_OPTS
contains the make options used to
install the package to the target directory. By default, the value
is DESTDIR=$(TARGET_DIR) install/fast
. The default value is correct
for most CMake packages, but it is still possible to override it if
needed.
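As an illustration, a hypothetical libfoo.mk could combine a few of these CMake-infrastructure variables as follows; the subdirectory name and the option re-enabled here are examples only:

# Build with the Ninja generator instead of GNU Makefiles
LIBFOO_CMAKE_BACKEND = ninja
# The main CMakeLists.txt lives in a subdirectory of the tarball (hypothetical)
LIBFOO_SUBDIR = cmake
# Re-enable the test suite that the infrastructure disables by default
LIBFOO_CONF_OPTS += -DBUILD_TESTING=ON
# The package cannot be built inside its source tree
LIBFOO_SUPPORTS_IN_SOURCE_BUILD = NO

$(eval $(cmake-package))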
With the CMake infrastructure, all the steps required to build and install the packages are already defined, and they generally work well for most CMake-based packages. However, when required, it is still possible to customize what is done in any particular step:
For example, if the package's .mk
file defines its own
LIBFOO_CONFIGURE_CMDS
variable, it will be used instead of the
default CMake one. However, using this method should be restricted
to very specific cases. Do not use it in the general case.
This infrastructure applies to Python packages that use the standard
Python setuptools, pep517, flit or maturin mechanisms as their build
system, generally recognizable by the usage of a setup.py
script or
pyproject.toml
file.
First, let’s see how to write a .mk
file for a Python package,
with an example:
01: ################################################################################
02: #
03: # python-foo
04: #
05: ################################################################################
06:
07: PYTHON_FOO_VERSION = 1.0
08: PYTHON_FOO_SOURCE = python-foo-$(PYTHON_FOO_VERSION).tar.xz
09: PYTHON_FOO_SITE = http://www.foosoftware.org/download
10: PYTHON_FOO_LICENSE = BSD-3-Clause
11: PYTHON_FOO_LICENSE_FILES = LICENSE
12: PYTHON_FOO_ENV = SOME_VAR=1
13: PYTHON_FOO_DEPENDENCIES = libmad
14: PYTHON_FOO_SETUP_TYPE = setuptools
15:
16: $(eval $(python-package))
On line 7, we declare the version of the package.
On line 8 and 9, we declare the name of the tarball (xz-ed tarball recommended) and the location of the tarball on the Web. Buildroot will automatically download the tarball from this location.
On line 10 and 11, we give licensing details about the package (its license on line 10, and the file containing the license text on line 11).
On line 12, we tell Buildroot to pass custom options to the Python
setup.py
script when it is configuring the package.
On line 13, we declare our dependencies, so that they are built before the build process of our package starts.
On line 14, we declare the specific Python build system being used. In
this case the setuptools
Python build system is used. The five
supported ones are flit
, pep517
, setuptools
, setuptools-rust
and maturin
.
Finally, on line 16, we invoke the python-package
macro that
generates all the Makefile rules that actually allow the package to be
built.
As a policy, packages that merely provide Python modules should all be
named python-<something>
in Buildroot. Other packages that use the
Python build system, but are not Python modules, can freely choose
their name (existing examples in Buildroot are scons
and
supervisor
).
The main macro of the Python package infrastructure is
python-package
. It is similar to the generic-package
macro. It is
also possible to create Python host packages with the
host-python-package
macro.
Just like the generic infrastructure, the Python infrastructure works
by defining a number of variables before calling the python-package
or host-python-package
macros.
All the package metadata information variables that exist in the generic package infrastructure also exist in the Python infrastructure.
Note that:
it is not necessary to add python or host-python in the
PYTHON_FOO_DEPENDENCIES variable of a package, since these basic
dependencies are automatically added as needed by the Python
package infrastructure;
similarly, it is not necessary to add host-python-setuptools to
PYTHON_FOO_DEPENDENCIES for setuptools-based packages, since it's
automatically added by the Python infrastructure as needed.
One variable specific to the Python infrastructure is mandatory:
PYTHON_FOO_SETUP_TYPE
, to define which Python build system is used
by the package. The five supported values are flit, pep517,
setuptools, setuptools-rust and maturin
. If you don’t know
which one is used in your package, look at the setup.py
or
pyproject.toml
file in your package source code, and see whether
it imports things from the flit
module or the setuptools
module. If the package is using a pyproject.toml
file without any
build-system requires and with a local in-tree backend-path one
should use pep517
.
A few additional variables, specific to the Python infrastructure, can optionally be defined, depending on the package’s needs. Many of them are only useful in very specific cases, typical packages will therefore only use a few of them, or none.
PYTHON_FOO_SUBDIR
may contain the name of a subdirectory inside the
package that contains the main setup.py
or pyproject.toml
file.
This is useful, if for example, the main setup.py
or pyproject.toml
file is not at the root of the tree extracted by the tarball. If
HOST_PYTHON_FOO_SUBDIR
is not specified, it defaults to
PYTHON_FOO_SUBDIR
.
PYTHON_FOO_ENV
, to specify additional environment variables to
pass to the Python setup.py
script (for setuptools packages) or
the support/scripts/pyinstaller.py
script (for flit/pep517
packages) for both the build and install steps. Note that the
infrastructure is automatically passing several standard variables,
defined in PKG_PYTHON_SETUPTOOLS_ENV
(for setuptools target
packages), HOST_PKG_PYTHON_SETUPTOOLS_ENV
(for setuptools host
packages), PKG_PYTHON_PEP517_ENV
(for flit/pep517 target packages)
and HOST_PKG_PYTHON_PEP517_ENV
(for flit/pep517 host packages).
PYTHON_FOO_BUILD_OPTS
, to specify additional options to pass to
the Python setup.py
script during the build step, this generally
only makes sense to use for setuptools based packages as flit/pep517
based packages do not pass these options to a setup.py
script but
instead pass them to support/scripts/pyinstaller.py
.
PYTHON_FOO_INSTALL_TARGET_OPTS
, PYTHON_FOO_INSTALL_STAGING_OPTS
,
HOST_PYTHON_FOO_INSTALL_OPTS
to specify additional options to pass
to the Python setup.py
script (for setuptools packages) or
support/scripts/pyinstaller.py
(for flit/pep517 packages) during
the target installation step, the staging installation step or the
host installation, respectively.
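For example, a hypothetical setuptools-based package might use a few of these variables as follows; the subdirectory, environment variable and option shown are purely illustrative:

PYTHON_FOO_SETUP_TYPE = setuptools
# setup.py is located in a python/ subdirectory of the tarball (hypothetical)
PYTHON_FOO_SUBDIR = python
# Extra environment and build options passed to setup.py (hypothetical values)
PYTHON_FOO_ENV = FOO_NO_NATIVE_EXT=1
PYTHON_FOO_BUILD_OPTS = --with-bar

$(eval $(python-package))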
With the Python infrastructure, all the steps required to build and install the packages are already defined, and they generally work well for most Python-based packages. However, when required, it is still possible to customize what is done in any particular step:
For example, if the package's .mk
file defines its own
PYTHON_FOO_BUILD_CMDS
variable, it will be used instead of the
default Python one. However, using this method should be restricted
to very specific cases. Do not use it in the general case.
If the Python package for which you would like to create a Buildroot
package is available on PyPI, you may want to use the scanpypi
tool
located in utils/
to automate the process.
You can find the list of existing PyPI packages at https://pypi.python.org.
scanpypi
requires Python’s setuptools
package to be installed on
your host.
When at the root of your buildroot directory, just do:
utils/scanpypi foo bar -o package
This will generate packages python-foo
and python-bar
in the package
folder if they exist on https://pypi.python.org.
Find the external python modules
menu and insert your package inside.
Keep in mind that the items inside a menu should be in alphabetical order.
Please keep in mind that you’ll most likely have to manually check the
package for any mistakes as there are things that cannot be guessed by
the generator (e.g. dependencies on any of the python core modules
such as BR2_PACKAGE_PYTHON_ZLIB). Also, please take note that the
license and license files are guessed and must be checked. You also
need to manually add the package to the package/Config.in
file.
If your Buildroot package is not in the official Buildroot tree but in a br2-external tree, use the -o flag as follows:
utils/scanpypi foo bar -o other_package_dir
This will generate packages python-foo
and python-bar
in the
other_package_dir
instead of package
.
Option -h
will list the available options:
utils/scanpypi -h
C Foreign Function Interface for Python (CFFI) provides a convenient
and reliable way to call compiled C code from Python using interface
declarations written in C. Python packages relying on this backend can
be identified by the appearance of a cffi
dependency in the
install_requires
field of their setup.py
file.
Such a package should:
add python-cffi as a runtime dependency in order to install the
compiled C library wrapper on the target. This is achieved by adding
select BR2_PACKAGE_PYTHON_CFFI to the package Config.in.
config BR2_PACKAGE_PYTHON_FOO
	bool "python-foo"
	select BR2_PACKAGE_PYTHON_CFFI # runtime
add host-python-cffi as a build-time dependency in order to
cross-compile the C wrapper. This is achieved by adding
host-python-cffi to the PYTHON_FOO_DEPENDENCIES variable.
################################################################################
#
# python-foo
#
################################################################################

...

PYTHON_FOO_DEPENDENCIES = host-python-cffi

$(eval $(python-package))
First, let’s see how to write a .mk
file for a LuaRocks-based package,
with an example:
01: ################################################################################
02: #
03: # lua-foo
04: #
05: ################################################################################
06:
07: LUA_FOO_VERSION = 1.0.2-1
08: LUA_FOO_NAME_UPSTREAM = foo
09: LUA_FOO_DEPENDENCIES = bar
10:
11: LUA_FOO_BUILD_OPTS += BAR_INCDIR=$(STAGING_DIR)/usr/include
12: LUA_FOO_BUILD_OPTS += BAR_LIBDIR=$(STAGING_DIR)/usr/lib
13: LUA_FOO_LICENSE = luaFoo license
14: LUA_FOO_LICENSE_FILES = $(LUA_FOO_SUBDIR)/COPYING
15:
16: $(eval $(luarocks-package))
On line 7, we declare the version of the package (the same as in the rockspec, which is the concatenation of the upstream version and the rockspec revision, separated by a hyphen -).
On line 8, we declare that the package is called "foo" on LuaRocks. In
Buildroot, we give Lua-related packages a name that starts with "lua", so the
Buildroot name is different from the upstream name. LUA_FOO_NAME_UPSTREAM
makes the link between the two names.
On line 9, we declare our dependencies against native libraries, so that they are built before the build process of our package starts.
On lines 11-12, we tell Buildroot to pass custom options to LuaRocks when it is building the package.
On lines 13-14, we specify the licensing terms for the package.
Finally, on line 16, we invoke the luarocks-package
macro that generates all the Makefile rules that actually allow the
package to be built.
Most of these details can be retrieved from the rock
and rockspec
.
So, this file and the Config.in file can be generated by running the
command luarocks buildroot foo lua-foo
in the Buildroot
directory. This command runs a specific Buildroot addon of luarocks
that will automatically generate a Buildroot package. The result must
still be manually inspected and possibly modified.
The package/Config.in
file has to be updated manually to include the
generated Config.in files.
LuaRocks is a deployment and management system for Lua modules, and supports
various build.type
: builtin
, make
and cmake
. In the context of
Buildroot, the luarocks-package
infrastructure only supports the builtin
mode. LuaRocks packages that use the make
or cmake
build mechanisms
should instead be packaged using the generic-package
and cmake-package
infrastructures in Buildroot, respectively.
The main macro of the LuaRocks package infrastructure is luarocks-package
:
like generic-package
it works by defining a number of variables providing
metadata information about the package, and then calling the luarocks-package
macro.
Just like the generic infrastructure, the LuaRocks infrastructure works
by defining a number of variables before calling the luarocks-package
macro.
All the package metadata information variables that exist in the generic package infrastructure also exist in the LuaRocks infrastructure.
Two of them are populated by the LuaRocks infrastructure (for the
download
step). If your package is not hosted on the LuaRocks mirror
$(BR2_LUAROCKS_MIRROR)
, you can override them:
LUA_FOO_SITE
, which defaults to $(BR2_LUAROCKS_MIRROR)
LUA_FOO_SOURCE
, which defaults to
$(lowercase LUA_FOO_NAME_UPSTREAM)-$(LUA_FOO_VERSION).src.rock
A few additional variables, specific to the LuaRocks infrastructure, are also defined. They can be overridden in specific cases.
LUA_FOO_NAME_UPSTREAM
, which defaults to lua-foo
, i.e. the Buildroot
package name
LUA_FOO_ROCKSPEC
, which defaults to
$(lowercase LUA_FOO_NAME_UPSTREAM)-$(LUA_FOO_VERSION).rockspec
LUA_FOO_SUBDIR
, which defaults to
$(LUA_FOO_NAME_UPSTREAM)-$(LUA_FOO_VERSION_WITHOUT_ROCKSPEC_REVISION)
LUA_FOO_BUILD_OPTS
contains additional build options for the
luarocks build
call.
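For instance, if the rock is not hosted on the LuaRocks mirror, a package could override the download-related variables as sketched below; the URL and option values are hypothetical:

LUA_FOO_NAME_UPSTREAM = foo
# Fetch the source rock from an alternate location instead of the mirror
LUA_FOO_SITE = http://www.foosoftware.org/releases
LUA_FOO_SOURCE = foo-$(LUA_FOO_VERSION).src.rock
# Extra options for the "luarocks build" call
LUA_FOO_BUILD_OPTS += FOO_LIBDIR=$(STAGING_DIR)/usr/lib

$(eval $(luarocks-package))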
First, let’s see how to write a .mk
file for a Perl/CPAN package,
with an example:
01: ################################################################################
02: #
03: # perl-foo-bar
04: #
05: ################################################################################
06:
07: PERL_FOO_BAR_VERSION = 0.02
08: PERL_FOO_BAR_SOURCE = Foo-Bar-$(PERL_FOO_BAR_VERSION).tar.gz
09: PERL_FOO_BAR_SITE = $(BR2_CPAN_MIRROR)/authors/id/M/MO/MONGER
10: PERL_FOO_BAR_DEPENDENCIES = perl-strictures
11: PERL_FOO_BAR_LICENSE = Artistic or GPL-1.0+
12: PERL_FOO_BAR_LICENSE_FILES = LICENSE
13: PERL_FOO_BAR_DISTNAME = Foo-Bar
14:
15: $(eval $(perl-package))
On line 7, we declare the version of the package.
On line 8 and 9, we declare the name of the tarball and the location of the tarball on a CPAN server. Buildroot will automatically download the tarball from this location.
On line 10, we declare our dependencies, so that they are built before the build process of our package starts.
On line 11 and 12, we give licensing details about the package (its license on line 11, and the file containing the license text on line 12).
On line 13, the name of the distribution as needed by the script
utils/scancpan
(in order to regenerate/upgrade these package files).
Finally, on line 15, we invoke the perl-package
macro that
generates all the Makefile rules that actually allow the package to be
built.
Most of these data can be retrieved from https://metacpan.org/.
So, this file and the Config.in can be generated by running
the script utils/scancpan Foo-Bar
in the Buildroot directory
(or in a br2-external tree).
This script creates a Config.in file and foo-bar.mk file for the
requested package, and also recursively for all dependencies specified by
CPAN. You should still manually edit the result. In particular, the
following things should be checked:
any dependency on a library provided by another (non-Perl) package is
not added automatically; it has to be added manually to
PERL_FOO_BAR_DEPENDENCIES;
the package/Config.in
file has to be updated manually to include the
generated Config.in files. As a hint, the scancpan
script prints out
the required source "…"
statements, sorted alphabetically.
As a policy, packages that provide Perl/CPAN modules should all be
named perl-<something>
in Buildroot.
This infrastructure handles various Perl build systems:
ExtUtils-MakeMaker
(EUMM), Module-Build
(MB) and Module-Build-Tiny
.
Build.PL
is preferred by default when a package provides a Makefile.PL
and a Build.PL
.
The main macro of the Perl/CPAN package infrastructure is
perl-package
. It is similar to the generic-package
macro. The ability to
have target and host packages is also available, with the
host-perl-package
macro.
Just like the generic infrastructure, the Perl/CPAN infrastructure
works by defining a number of variables before calling the
perl-package
macro.
All the package metadata information variables that exist in the generic package infrastructure also exist in the Perl/CPAN infrastructure.
Note that setting PERL_FOO_INSTALL_STAGING
to YES
has no effect
unless a PERL_FOO_INSTALL_STAGING_CMDS
variable is defined. The perl
infrastructure doesn’t define these commands since Perl modules generally
don’t need to be installed to the staging
directory.
A few additional variables, specific to the Perl/CPAN infrastructure, can also be defined. Many of them are only useful in very specific cases, typical packages will therefore only use a few of them.
PERL_FOO_PREFER_INSTALLER
/HOST_PERL_FOO_PREFER_INSTALLER
,
specifies the preferred installation method. Possible values are
EUMM
(for Makefile.PL
based installation using
ExtUtils-MakeMaker
) and MB
(for Build.PL
based installation
using Module-Build
). This variable is only used when the package
provides both installation methods.
PERL_FOO_CONF_ENV
/HOST_PERL_FOO_CONF_ENV
, to specify additional
environment variables to pass to the perl Makefile.PL
or perl Build.PL
.
By default, empty.
PERL_FOO_CONF_OPTS
/HOST_PERL_FOO_CONF_OPTS
, to specify additional
configure options to pass to the perl Makefile.PL
or perl Build.PL
.
By default, empty.
PERL_FOO_BUILD_OPTS
/HOST_PERL_FOO_BUILD_OPTS
, to specify additional
options to pass to make pure_all
or perl Build build
in the build step.
By default, empty.
PERL_FOO_INSTALL_TARGET_OPTS
, to specify additional options to
pass to make pure_install
or perl Build install
in the install step.
By default, empty.
HOST_PERL_FOO_INSTALL_OPTS
, to specify additional options to
pass to make pure_install
or perl Build install
in the install step.
By default, empty.
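As a short sketch, a package that ships both a Makefile.PL and a Build.PL could use these variables as follows; the environment variable and option values are illustrative only:

# Prefer the ExtUtils-MakeMaker (Makefile.PL) installation method
PERL_FOO_BAR_PREFER_INSTALLER = EUMM
# Extra environment and options for the configure step (hypothetical values)
PERL_FOO_BAR_CONF_ENV = FOO_BAR_NO_XS=1
PERL_FOO_BAR_CONF_OPTS = INSTALLDIRS=vendor

$(eval $(perl-package))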
In Buildroot, a virtual package is a package whose functionalities are provided by one or more packages, referred to as providers. The virtual package management is an extensible mechanism allowing the user to choose the provider used in the rootfs.
For example, OpenGL ES is an API for 2D and 3D graphics on embedded systems.
The implementation of this API is different for the Allwinner Tech Sunxi and
the Texas Instruments OMAP35xx platforms. So libgles
will be a virtual
package and sunxi-mali-utgard
and ti-gfx
will be the providers.
In the following example, we will explain how to add a new virtual package (something-virtual) and a provider for it (some-provider).
First, let’s create the virtual package.
The Config.in
file of virtual package something-virtual should contain:
01: config BR2_PACKAGE_HAS_SOMETHING_VIRTUAL
02: 	bool
03:
04: config BR2_PACKAGE_PROVIDES_SOMETHING_VIRTUAL
05: 	depends on BR2_PACKAGE_HAS_SOMETHING_VIRTUAL
06: 	string
In this file, we declare two options, BR2_PACKAGE_HAS_SOMETHING_VIRTUAL
and
BR2_PACKAGE_PROVIDES_SOMETHING_VIRTUAL
, whose values will be used by the
providers.
The .mk
for the virtual package should just evaluate the virtual-package
macro:
01: ################################################################################
02: #
03: # something-virtual
04: #
05: ################################################################################
06:
07: $(eval $(virtual-package))
The ability to have target and host packages is also available, with the
host-virtual-package
macro.
When adding a package as a provider, only the Config.in
file requires some
modifications.
The Config.in
file of the package some-provider, which provides the
functionalities of something-virtual, should contain:
01: config BR2_PACKAGE_SOME_PROVIDER
02: 	bool "some-provider"
03: 	select BR2_PACKAGE_HAS_SOMETHING_VIRTUAL
04: 	help
05: 	  This is a comment that explains what some-provider is.
06:
07: 	  http://foosoftware.org/some-provider/
08:
09: if BR2_PACKAGE_SOME_PROVIDER
10: config BR2_PACKAGE_PROVIDES_SOMETHING_VIRTUAL
11: 	default "some-provider"
12: endif
On line 3, we select BR2_PACKAGE_HAS_SOMETHING_VIRTUAL
, and on line 11, we
set the value of BR2_PACKAGE_PROVIDES_SOMETHING_VIRTUAL
to the name of the
provider, but only if it is selected.
The .mk
file should also declare an additional variable
SOME_PROVIDER_PROVIDES
to contain the names of all the virtual
packages it is an implementation of:
01: SOME_PROVIDER_PROVIDES = something-virtual
Of course, do not forget to add the proper build and runtime dependencies for this package!
When adding a package that requires a certain FEATURE
provided by a virtual
package, you have to use depends on BR2_PACKAGE_HAS_FEATURE
, like so:
config BR2_PACKAGE_HAS_FEATURE
	bool

config BR2_PACKAGE_FOO
	bool "foo"
	depends on BR2_PACKAGE_HAS_FEATURE
If your package really requires a specific provider, then you’ll have to
make your package depends on
this provider; you can not select
a
provider.
Let’s take an example with two providers for a FEATURE
:
config BR2_PACKAGE_HAS_FEATURE
	bool

config BR2_PACKAGE_FOO
	bool "foo"
	select BR2_PACKAGE_HAS_FEATURE

config BR2_PACKAGE_BAR
	bool "bar"
	select BR2_PACKAGE_HAS_FEATURE
And you are adding a package that needs FEATURE
as provided by foo
,
but not as provided by bar
.
If you were to use select BR2_PACKAGE_FOO
, then the user would still
be able to select BR2_PACKAGE_BAR
in the menuconfig. This would create
a configuration inconsistency, whereby two providers of the same FEATURE
would be enabled at once, one explicitly set by the user, the other
implicitly by your select
.
Instead, you have to use depends on BR2_PACKAGE_FOO
, which avoids any
implicit configuration inconsistency.
A popular way for a software package to handle user-specified
configuration is kconfig
. Among others, it is used by the Linux
kernel, Busybox, and Buildroot itself. The presence of a .config file
and a menuconfig
target are two well-known symptoms of kconfig being
used.
Buildroot features an infrastructure for packages that use kconfig for
their configuration. This infrastructure provides the necessary logic to
expose the package’s menuconfig
target as foo-menuconfig
in
Buildroot, and to handle the copying back and forth of the configuration
file in a correct way.
The main macro of the kconfig package infrastructure is
kconfig-package
. It is similar to the generic-package
macro.
Just like the generic infrastructure, the kconfig infrastructure works
by defining a number of variables before calling the kconfig-package
macro.
All the package metadata information variables that exist in the generic package infrastructure also exist in the kconfig infrastructure.
In order to use the kconfig-package
infrastructure for a Buildroot
package, the minimally required lines in the .mk
file, in addition to
the variables required by the generic-package
infrastructure, are:
FOO_KCONFIG_FILE = reference-to-source-configuration-file

$(eval $(kconfig-package))
This snippet creates the following make targets:
foo-menuconfig
, which calls the package’s menuconfig
target
foo-update-config
, which copies the configuration back to the
source configuration file. It is not possible to use this target
when fragment files are set.
foo-update-defconfig
, which copies the configuration back to the
source configuration file. The configuration file will only list the
options that differ from the default values. It is not possible to
use this target when fragment files are set.
foo-diff-config
, which outputs the differences between the current
configuration and the one defined in the Buildroot configuration for
this kconfig package. The output is useful to identify the
configuration changes that may have to be propagated to
configuration fragments for example.
and ensures that the source configuration file is copied to the build directory at the right moment.
There are two options to specify a configuration file to use, either
FOO_KCONFIG_FILE
(as in the example, above) or FOO_KCONFIG_DEFCONFIG
.
It is mandatory to provide either, but not both:
FOO_KCONFIG_FILE
specifies the path to a defconfig or full-config file
to be used to configure the package.
FOO_KCONFIG_DEFCONFIG
specifies the defconfig make rule to call to
configure the package.
In addition to these minimally required lines, several optional variables can be set to suit the needs of the package under consideration:
FOO_KCONFIG_EDITORS
: a space-separated list of kconfig editors to
support, for example menuconfig xconfig. By default, menuconfig.
FOO_KCONFIG_FRAGMENT_FILES
: a space-separated list of configuration
fragment files that are merged to the main configuration file.
Fragment files are typically used when there is a desire to stay in sync
with an upstream (def)config file, with some minor modifications.
FOO_KCONFIG_OPTS
: extra options to pass when calling the kconfig
editors. This may need to include $(FOO_MAKE_OPTS), for example. By
default, empty.
FOO_KCONFIG_FIXUP_CMDS
: a list of shell commands needed to fixup the
configuration file after copying it or running a kconfig editor. Such
commands may be needed to ensure a configuration consistent with other
configuration of Buildroot, for example. By default, empty.
FOO_KCONFIG_DOTCONFIG
: path (with filename) of the .config
file,
relative to the package source tree. The default, .config
, should
be well suited for all packages that use the standard kconfig
infrastructure as inherited from the Linux kernel; some packages use
a derivative of kconfig that use a different location.
FOO_KCONFIG_DEPENDENCIES
: the list of packages (most probably, host
packages) that need to be built before this package’s kconfig is
interpreted. Seldom used. By default, empty.
FOO_KCONFIG_SUPPORTS_DEFCONFIG
: whether the package’s kconfig system
supports using defconfig files; few packages do not. By default, YES.
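Putting a few of these together, a hypothetical foo.mk that uses a defconfig shipped in the package directory plus a configuration fragment could look like the sketch below; the file names are examples only:

# Use a defconfig file shipped in the package directory as the base configuration
FOO_KCONFIG_FILE = $(FOO_PKGDIR)/foo.config
# Merge a configuration fragment on top of it (hypothetical file name)
FOO_KCONFIG_FRAGMENT_FILES = $(FOO_PKGDIR)/disable-debug.fragment
# Also expose xconfig in addition to menuconfig
FOO_KCONFIG_EDITORS = menuconfig xconfig
# Pass the package's usual make options to the kconfig editors
FOO_KCONFIG_OPTS = $(FOO_MAKE_OPTS)

$(eval $(kconfig-package))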
First, let’s see how to write a .mk
file for a rebar-based package,
with an example:
01: ################################################################################
02: #
03: # erlang-foobar
04: #
05: ################################################################################
06:
07: ERLANG_FOOBAR_VERSION = 1.0
08: ERLANG_FOOBAR_SOURCE = erlang-foobar-$(ERLANG_FOOBAR_VERSION).tar.xz
09: ERLANG_FOOBAR_SITE = http://www.foosoftware.org/download
10: ERLANG_FOOBAR_DEPENDENCIES = host-libaaa libbbb
11:
12: $(eval $(rebar-package))
On line 7, we declare the version of the package.
On line 8 and 9, we declare the name of the tarball (xz-ed tarball recommended) and the location of the tarball on the Web. Buildroot will automatically download the tarball from this location.
On line 10, we declare our dependencies, so that they are built before the build process of our package starts.
Finally, on line 12, we invoke the rebar-package
macro that
generates all the Makefile rules that actually allow the package to
be built.
The main macro of the rebar
package infrastructure is
rebar-package
. It is similar to the generic-package
macro. The
ability to have host packages is also available, with the
host-rebar-package
macro.
Just like the generic infrastructure, the rebar
infrastructure works
by defining a number of variables before calling the rebar-package
macro.
All the package metadata information variables that exist in the
generic package infrastructure also
exist in the rebar
infrastructure.
A few additional variables, specific to the rebar
infrastructure,
can also be defined. Many of them are only useful in very specific
cases, typical packages will therefore only use a few of them.
ERLANG_FOOBAR_USE_AUTOCONF
, to specify that the package uses
autoconf at the configuration step. When a package sets this
variable to YES
, the autotools
infrastructure is used.
Note. You can also use some of the variables from the autotools
infrastructure: ERLANG_FOOBAR_CONF_ENV
, ERLANG_FOOBAR_CONF_OPTS
,
ERLANG_FOOBAR_AUTORECONF
, ERLANG_FOOBAR_AUTORECONF_ENV
and
ERLANG_FOOBAR_AUTORECONF_OPTS
.
ERLANG_FOOBAR_USE_BUNDLED_REBAR
, to specify that the package has
a bundled version of rebar and that it shall be used. Valid
values are YES
or NO
(the default).
Note. If the package bundles a rebar utility, but can use the generic
one that Buildroot provides, just say NO
(i.e., do not specify
this variable). Only set if it is mandatory to use the rebar
utility bundled in this package.
ERLANG_FOOBAR_REBAR_ENV
, to specify additional environment
variables to pass to the rebar utility.
ERLANG_FOOBAR_KEEP_DEPENDENCIES
, to keep the dependencies
described in the rebar.config file. Valid values are YES
or NO
(the default). Unless this variable is set to YES
, the rebar
infrastructure removes such dependencies in a post-patch hook to
ensure rebar does not download nor compile them.
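For example, a hypothetical erlang-foobar.mk could set some of these variables as shown below; the configure option and environment variable are illustrative only:

# The package's configure step is autoconf-based
ERLANG_FOOBAR_USE_AUTOCONF = YES
ERLANG_FOOBAR_CONF_OPTS = --without-baz
# The rebar utility bundled with the package must be used
ERLANG_FOOBAR_USE_BUNDLED_REBAR = YES
# Extra environment for rebar invocations (hypothetical variable)
ERLANG_FOOBAR_REBAR_ENV = FOOBAR_DEBUG=0

$(eval $(rebar-package))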
With the rebar infrastructure, all the steps required to build and install the packages are already defined, and they generally work well for most rebar-based packages. However, when required, it is still possible to customize what is done in any particular step:
For example, if the package's .mk
file defines its
own ERLANG_FOOBAR_BUILD_CMDS
variable, it will be used instead
of the default rebar one. However, using this method should be
restricted to very specific cases. Do not use it in the general
case.
First, let’s see how to write a .mk
file for a Waf-based package, with
an example:
01: ################################################################################
02: #
03: # libfoo
04: #
05: ################################################################################
06:
07: LIBFOO_VERSION = 1.0
08: LIBFOO_SOURCE = libfoo-$(LIBFOO_VERSION).tar.gz
09: LIBFOO_SITE = http://www.foosoftware.org/download
10: LIBFOO_CONF_OPTS = --enable-bar --disable-baz
11: LIBFOO_DEPENDENCIES = bar
12:
13: $(eval $(waf-package))
On line 7, we declare the version of the package.
On line 8 and 9, we declare the name of the tarball (xz-ed tarball recommended) and the location of the tarball on the Web. Buildroot will automatically download the tarball from this location.
On line 10, we tell Buildroot what options to enable for libfoo.
On line 11, we tell Buildroot the dependencies of libfoo.
Finally, on line 13, we invoke the waf-package
macro that generates all the Makefile rules that actually allow the
package to be built.
The main macro of the Waf package infrastructure is waf-package
.
It is similar to the generic-package
macro.
Just like the generic infrastructure, the Waf infrastructure works
by defining a number of variables before calling the waf-package
macro.
All the package metadata information variables that exist in the generic package infrastructure also exist in the Waf infrastructure.
A few additional variables, specific to the Waf infrastructure, can also be defined.
LIBFOO_SUBDIR
may contain the name of a subdirectory inside the
package that contains the main wscript file. This is useful,
if for example, the main wscript file is not at the root of
the tree extracted by the tarball. If HOST_LIBFOO_SUBDIR
is not
specified, it defaults to LIBFOO_SUBDIR
.
LIBFOO_NEEDS_EXTERNAL_WAF
can be set to YES
or NO
to tell
Buildroot to use the bundled waf
executable. If set to NO
, the
default, then Buildroot will use the waf executable provided in the
package source tree; if set to YES
, then Buildroot will download,
install waf as a host tool and use it to build the package.
LIBFOO_WAF_OPTS
, to specify additional options to pass to the
waf
script at every step of the package build process: configure,
build and installation. By default, empty.
LIBFOO_CONF_OPTS
, to specify additional options to pass to the
waf
script for the configuration step. By default, empty.
LIBFOO_BUILD_OPTS
, to specify additional options to pass to the
waf
script during the build step. By default, empty.
LIBFOO_INSTALL_STAGING_OPTS
, to specify additional options to pass
to the waf
script during the staging installation step. By default,
empty.
LIBFOO_INSTALL_TARGET_OPTS
, to specify additional options to pass
to the waf
script during the target installation step. By default,
empty.
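As a minimal sketch, a package that does not ship a usable waf script could set these variables as follows; the option values are hypothetical:

# The package cannot use its bundled waf; download and use a host waf instead
LIBFOO_NEEDS_EXTERNAL_WAF = YES
# Options passed to waf at every step, and configure-only options (examples)
LIBFOO_WAF_OPTS = --verbose
LIBFOO_CONF_OPTS = --enable-bar

$(eval $(waf-package))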
Meson is an open source build system meant to be both extremely fast, and, even more importantly, as user friendly as possible. It uses Ninja as a companion tool to perform the actual build operations.
Let’s see how to write a .mk
file for a Meson-based package, with an example:
01: ################################################################################
02: #
03: # foo
04: #
05: ################################################################################
06:
07: FOO_VERSION = 1.0
08: FOO_SOURCE = foo-$(FOO_VERSION).tar.gz
09: FOO_SITE = http://www.foosoftware.org/download
10: FOO_LICENSE = GPL-3.0+
11: FOO_LICENSE_FILES = COPYING
12: FOO_INSTALL_STAGING = YES
13:
14: FOO_DEPENDENCIES = host-pkgconf bar
15:
16: ifeq ($(BR2_PACKAGE_BAZ),y)
17: FOO_CONF_OPTS += -Dbaz=true
18: FOO_DEPENDENCIES += baz
19: else
20: FOO_CONF_OPTS += -Dbaz=false
21: endif
22:
23: $(eval $(meson-package))
The Makefile starts with the definition of the standard variables for package declaration (lines 7 to 11).
On line 23, we invoke the meson-package
macro that generates all the
Makefile rules that actually allow the package to be built.
In the example, host-pkgconf
and bar
are declared as dependencies in
FOO_DEPENDENCIES
at line 14 because the Meson build file of foo
uses
pkg-config
to determine the compilation flags and libraries of package bar
.
Note that it is not necessary to add host-meson
in the FOO_DEPENDENCIES
variable of a package, since this basic dependency is automatically added as
needed by the Meson package infrastructure.
If the "baz" package is selected, then support for the "baz" feature in "foo" is
activated by adding -Dbaz=true
to FOO_CONF_OPTS
at line 17, as specified in
the meson_options.txt
file in "foo" source tree. The "baz" package is also
added to FOO_DEPENDENCIES
. Note that the support for baz
is explicitly
disabled at line 20, if the package is not selected.
To sum it up, to add a new meson-based package, the Makefile example can be
copied verbatim then edited to replace all occurrences of FOO
with the
uppercase name of the new package and update the values of the standard
variables.
The main macro of the Meson package infrastructure is meson-package
. It is
similar to the generic-package
macro. The ability to have target and host
packages is also available, with the host-meson-package
macro.
Just like the generic infrastructure, the Meson infrastructure works by defining
a number of variables before calling the meson-package
macro.
All the package metadata information variables that exist in the generic package infrastructure also exist in the Meson infrastructure.
A few additional variables, specific to the Meson infrastructure, can also be defined. Many of them are only useful in very specific cases, typical packages will therefore only use a few of them.
FOO_SUBDIR
may contain the name of a subdirectory inside the
package that contains the main meson.build file. This is useful,
if for example, the main meson.build file is not at the root of
the tree extracted by the tarball. If HOST_FOO_SUBDIR
is not
specified, it defaults to FOO_SUBDIR
.
FOO_CONF_ENV
, to specify additional environment variables to pass to
meson
for the configuration step. By default, empty.
FOO_CONF_OPTS
, to specify additional options to pass to meson
for the
configuration step. By default, empty.
FOO_CFLAGS
, to specify compiler arguments added to the package specific
cross-compile.conf
file c_args
property. By default, the value of
TARGET_CFLAGS
.
FOO_CXXFLAGS
, to specify compiler arguments added to the package specific
cross-compile.conf
file cpp_args
property. By default, the value of
TARGET_CXXFLAGS
.
FOO_LDFLAGS
, to specify compiler arguments added to the package specific
cross-compile.conf
file c_link_args
and cpp_link_args
properties. By
default, the value of TARGET_LDFLAGS
.
FOO_MESON_EXTRA_BINARIES
, to specify a space-separated list of programs
to add to the [binaries]
section of the meson cross-compilation.conf
configuration file. The format is program-name='/path/to/program'
, with
no space around the =
sign, and with the path of the program between
single quotes. By default, empty. Note that Buildroot already sets the
correct values for c
, cpp
, ar
, strip
, and pkgconfig
.
FOO_MESON_EXTRA_PROPERTIES
, to specify a space-separated list of
properties to add to the [properties]
section of the meson
cross-compilation.conf
configuration file. The format is
property-name=<value>
with no space around the =
sign, and with
single quotes around string values. By default, empty. Note that
Buildroot already sets values for needs_exe_wrapper
, c_args
,
c_link_args
, cpp_args
, cpp_link_args
, sys_root
, and
pkg_config_libdir
.
FOO_NINJA_ENV
, to specify additional environment variables to pass to
ninja
, meson companion tool in charge of the build operations. By default,
empty.
FOO_NINJA_OPTS
, to specify a space-separated list of targets to build. By
default, empty, to build the default target(s).
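For illustration, a hypothetical foo.mk could use a few of these Meson-infrastructure variables as follows; the extra binary, property and ninja target shown are examples only:

# Point meson at an extra host tool and add a cross-compilation property
# (program name and property value are hypothetical)
FOO_MESON_EXTRA_BINARIES = foogen='$(HOST_DIR)/bin/foogen'
FOO_MESON_EXTRA_PROPERTIES = platform='rpi'
# Only build a specific ninja target instead of the default one
FOO_NINJA_OPTS = libfoo

$(eval $(meson-package))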
Cargo is the package manager for the Rust programming language. It allows the user to build programs or libraries written in Rust, but it also downloads and manages their dependencies, to ensure repeatable builds. Cargo packages are called "crates".
The Config.in
file of Cargo-based package foo should contain:
01: config BR2_PACKAGE_FOO
02: 	bool "foo"
03: 	depends on BR2_PACKAGE_HOST_RUSTC_TARGET_ARCH_SUPPORTS
04: 	select BR2_PACKAGE_HOST_RUSTC
05: 	help
06: 	  This is a comment that explains what foo is.
07:
08: 	  http://foosoftware.org/foo/
And the .mk
file for this package should contain:
01: ################################################################################
02: #
03: # foo
04: #
05: ################################################################################
06:
07: FOO_VERSION = 1.0
08: FOO_SOURCE = foo-$(FOO_VERSION).tar.gz
09: FOO_SITE = http://www.foosoftware.org/download
10: FOO_LICENSE = GPL-3.0+
11: FOO_LICENSE_FILES = COPYING
12:
13: $(eval $(cargo-package))
The Makefile starts with the definition of the standard variables for package declaration (lines 7 to 11).
As seen in line 13, it is based on the cargo-package
infrastructure. Cargo will be invoked automatically by this
infrastructure to build and install the package.
It is still possible to define custom build commands or install commands (i.e. with FOO_BUILD_CMDS and FOO_INSTALL_TARGET_CMDS). Those will then replace the commands from the cargo infrastructure.
The main macros for the Cargo package infrastructure are
cargo-package
for target packages and host-cargo-package
for host
packages.
Just like the generic infrastructure, the Cargo infrastructure works
by defining a number of variables before calling the cargo-package
or host-cargo-package
macros.
All the package metadata information variables that exist in the generic package infrastructure also exist in the Cargo infrastructure.
A few additional variables, specific to the Cargo infrastructure, can also be defined. Many of them are only useful in very specific cases, typical packages will therefore only use a few of them.
FOO_SUBDIR
may contain the name of a subdirectory inside the package
that contains the Cargo.toml file. This is useful, if for example, it
is not at the root of the tree extracted by the tarball. If
HOST_FOO_SUBDIR
is not specified, it defaults to FOO_SUBDIR
.
FOO_CARGO_ENV
can be used to pass additional variables in the
environment of cargo
invocations. It is used at both build and
installation time.
FOO_CARGO_BUILD_OPTS
can be used to pass additional options to
cargo
at build time.
FOO_CARGO_INSTALL_OPTS
can be used to pass additional options to
cargo
at install time.
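For example, a package could enable a cargo feature and export an extra environment variable as sketched below; the variable name and feature are hypothetical:

# Extra environment and build options for cargo invocations (illustrative)
FOO_CARGO_ENV = FOO_BACKEND=native
FOO_CARGO_BUILD_OPTS = --features=bar

$(eval $(cargo-package))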
A crate can depend on other libraries from crates.io or git
repositories, listed in its Cargo.toml
file. Buildroot automatically
takes care of downloading such dependencies as part of the download
step of packages that use the cargo-package
infrastructure. Such
dependencies are then kept together with the package source code in
the tarball cached in Buildroot’s DL_DIR
, and therefore the hash of
the package’s tarball doesn’t only cover the source of the package
itself, but also covers the sources of the dependencies. Thus, a change
injected into one of the dependencies will also be discovered by the
hash check. In addition, this mechanism allows the build to be
performed completely offline since cargo will not do any downloads
during the build. This mechanism is called vendoring the dependencies.
This infrastructure applies to Go packages that use the standard build system and use bundled dependencies.
First, let’s see how to write a .mk
file for a go package,
with an example:
01: ################################################################################
02: #
03: # foo
04: #
05: ################################################################################
06:
07: FOO_VERSION = 1.0
08: FOO_SITE = $(call github,bar,foo,$(FOO_VERSION))
09: FOO_LICENSE = BSD-3-Clause
10: FOO_LICENSE_FILES = LICENSE
11:
12: $(eval $(golang-package))
On line 7, we declare the version of the package.
On line 8, we declare the upstream location of the package, here fetched from Github, since a large number of Go packages are hosted on Github.
On line 9 and 10, we give licensing details about the package.
Finally, on line 12, we invoke the golang-package
macro that
generates all the Makefile rules that actually allow the package to be
built.
In their Config.in
file, packages using the golang-package
infrastructure should depend on BR2_PACKAGE_HOST_GO_TARGET_ARCH_SUPPORTS
because Buildroot will automatically add a dependency on host-go
to such packages.
If you need CGO support in your package, you must add a dependency on
BR2_PACKAGE_HOST_GO_TARGET_CGO_LINKING_SUPPORTS
.
The main macro of the Go package infrastructure is
golang-package
. It is similar to the generic-package
macro. The
ability to build host packages is also available, with the
host-golang-package
macro.
Host packages built by host-golang-package
macro should depend on
BR2_PACKAGE_HOST_GO_HOST_ARCH_SUPPORTS
.
Just like the generic infrastructure, the Go infrastructure works
by defining a number of variables before calling the golang-package
macro.
All the package metadata information variables that exist in the generic package infrastructure also exist in the Go infrastructure.
Note that it is not necessary to add host-go
in the
FOO_DEPENDENCIES
variable of a package, since this basic dependency
is automatically added as needed by the Go package infrastructure.
A few additional variables, specific to the Go infrastructure, can optionally be defined, depending on the package’s needs. Many of them are only useful in very specific cases, typical packages will therefore only use a few of them, or none.
The Go module name of the package can be specified in the FOO_GOMOD
variable. If not specified, it defaults to
URL-domain/1st-part-of-URL/2nd-part-of-URL
, e.g FOO_GOMOD
will
take the value github.com/bar/foo
for a package that specifies
FOO_SITE = $(call github,bar,foo,$(FOO_VERSION))
. The Go package
infrastructure will automatically generate a minimal go.mod
file
in the package source tree if it doesn’t exist.
FOO_LDFLAGS
and FOO_TAGS
can be used to pass respectively the
LDFLAGS
or the TAGS
to the go
build command.
FOO_BUILD_TARGETS
can be used to pass the list of targets that
should be built. If FOO_BUILD_TARGETS
is not specified, it
defaults to .
. We then have two cases:
FOO_BUILD_TARGETS
is .
. In this case, we assume only one binary
will be produced, and that by default we name it after the package
name. If that is not appropriate, the name of the produced binary
can be overridden using FOO_BIN_NAME
.
FOO_BUILD_TARGETS
is not .
. In this case, we iterate over the
values to build each target, and for each, the produced binary is
the non-directory component of the target. For example if
FOO_BUILD_TARGETS = cmd/docker cmd/dockerd
the binaries produced
are docker
and dockerd
.
FOO_INSTALL_BINS
can be used to pass the list of binaries that
should be installed in /usr/bin
on the target. If
FOO_INSTALL_BINS
is not specified, it defaults to the lower-case
name of package.
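As an illustration, a hypothetical foo.mk building two commands from the package could use these variables as follows; the target and binary names, ldflags and tags are examples only:

# Module name as used in go.mod (matches the github FOO_SITE above)
FOO_GOMOD = github.com/bar/foo
# Build two commands; the produced binaries are foo-client and foo-server
FOO_BUILD_TARGETS = cmd/foo-client cmd/foo-server
# Only install one of them to /usr/bin on the target
FOO_INSTALL_BINS = foo-client
# Pass extra ldflags and build tags to the "go build" command (hypothetical)
FOO_LDFLAGS = -X main.Version=$(FOO_VERSION)
FOO_TAGS = netgo

$(eval $(golang-package))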
With the Go infrastructure, all the steps required to build and install the packages are already defined, and they generally work well for most Go-based packages. However, when required, it is still possible to customize what is done in any particular step:
For example, if the package's .mk
file defines its own
FOO_BUILD_CMDS
variable, it will be used instead of the default Go
one. However, using this method should be restricted to very
specific cases. Do not use it in the general case.
A Go package can depend on other Go modules, listed in its go.mod
file. Buildroot automatically takes care of downloading such
dependencies as part of the download step of packages that use the
golang-package
infrastructure. Such dependencies are then kept
together with the package source code in the tarball cached in
Buildroot’s DL_DIR
, and therefore the hash of the package’s tarball
includes such dependencies.
This mechanism ensures that any change in the dependencies will be detected, and allows the build to be performed completely offline.
First, let’s see how to write a .mk
file for a QMake-based package, with
an example:
01: ################################################################################
02: #
03: # libfoo
04: #
05: ################################################################################
06:
07: LIBFOO_VERSION = 1.0
08: LIBFOO_SOURCE = libfoo-$(LIBFOO_VERSION).tar.gz
09: LIBFOO_SITE = http://www.foosoftware.org/download
10: LIBFOO_CONF_OPTS = QT_CONFIG+=bar QT_CONFIG-=baz
11: LIBFOO_DEPENDENCIES = bar
12:
13: $(eval $(qmake-package))
On line 7, we declare the version of the package.
On line 8 and 9, we declare the name of the tarball (xz-ed tarball recommended) and the location of the tarball on the Web. Buildroot will automatically download the tarball from this location.
On line 10, we tell Buildroot what options to enable for libfoo.
On line 11, we tell Buildroot the dependencies of libfoo.
Finally, on line 13, we invoke the qmake-package
macro that generates all the Makefile rules that actually allow the
package to be built.
The main macro of the QMake package infrastructure is qmake-package
.
It is similar to the generic-package
macro.
Just like the generic infrastructure, the QMake infrastructure works
by defining a number of variables before calling the qmake-package
macro.
All the package metadata information variables that exist in the generic package infrastructure also exist in the QMake infrastructure.
A few additional variables, specific to the QMake infrastructure, can also be defined.
LIBFOO_CONF_ENV
, to specify additional environment variables to
pass to the qmake
script for the configuration step. By default, empty.
LIBFOO_CONF_OPTS
, to specify additional options to pass to the
qmake
script for the configuration step. By default, empty.
LIBFOO_MAKE_ENV
, to specify additional environment variables to the
make
command during the build and install steps. By default, empty.
LIBFOO_MAKE_OPTS
, to specify additional targets to pass to the
make
command during the build step. By default, empty.
LIBFOO_INSTALL_STAGING_OPTS
, to specify additional targets to pass
to the make
command during the staging installation step. By default,
install
.
LIBFOO_INSTALL_TARGET_OPTS
, to specify additional targets to pass
to the make
command during the target installation step. By default,
install
.
LIBFOO_SYNC_QT_HEADERS
, to run syncqt.pl before qmake. Some packages
need this to have a properly populated include directory before
running the build.
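A short sketch of how some of these QMake-infrastructure variables could be used in a hypothetical libfoo.mk; the environment value and make target are examples only:

# Run syncqt.pl before qmake to populate the include directory
LIBFOO_SYNC_QT_HEADERS = YES
# Extra environment passed to qmake for the configuration step (hypothetical)
LIBFOO_CONF_ENV = QMAKE_CXXFLAGS+=-DFOO_NO_GUI
# Only build a specific make target during the build step (hypothetical)
LIBFOO_MAKE_OPTS = sub-src

$(eval $(qmake-package))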
Buildroot offers a helper infrastructure to make it easy to write packages that build and install Linux kernel modules. Some packages only contain a kernel module, other packages contain programs and libraries in addition to kernel modules. Buildroot’s helper infrastructure supports either case.
Let’s start with an example on how to prepare a simple package that only builds a kernel module, and no other component:
01: ################################################################################
02: #
03: # foo
04: #
05: ################################################################################
06:
07: FOO_VERSION = 1.2.3
08: FOO_SOURCE = foo-$(FOO_VERSION).tar.xz
09: FOO_SITE = http://www.foosoftware.org/download
10: FOO_LICENSE = GPL-2.0
11: FOO_LICENSE_FILES = COPYING
12:
13: $(eval $(kernel-module))
14: $(eval $(generic-package))
Lines 7-11 define the usual meta-data to specify the version, archive name, remote URI where to find the package source, licensing information.
On line 13, we invoke the kernel-module
helper infrastructure, that
generates all the appropriate Makefile rules and variables to build
that kernel module.
Finally, on line 14, we invoke the
generic-package
infrastructure.
The dependency on linux
is automatically added, so it is not needed to
specify it in FOO_DEPENDENCIES
.
What you may have noticed is that, unlike other package infrastructures,
we explicitly invoke a second infrastructure. This allows a package to
build a kernel module, but also, if needed, use any one of other package
infrastructures to build normal userland components (libraries,
executables…). Using the kernel-module
infrastructure on its own is
not sufficient; another package infrastructure must be used.
Let’s look at a more complex example:
01: ################################################################################
02: #
03: # foo
04: #
05: ################################################################################
06:
07: FOO_VERSION = 1.2.3
08: FOO_SOURCE = foo-$(FOO_VERSION).tar.xz
09: FOO_SITE = http://www.foosoftware.org/download
10: FOO_LICENSE = GPL-2.0
11: FOO_LICENSE_FILES = COPYING
12:
13: FOO_MODULE_SUBDIRS = driver/base
14: FOO_MODULE_MAKE_OPTS = KVERSION=$(LINUX_VERSION_PROBED)
15:
16: ifeq ($(BR2_PACKAGE_LIBBAR),y)
17: FOO_DEPENDENCIES += libbar
18: FOO_CONF_OPTS += --enable-bar
19: FOO_MODULE_SUBDIRS += driver/bar
20: else
21: FOO_CONF_OPTS += --disable-bar
22: endif
23:
24: $(eval $(kernel-module))
25:
26: $(eval $(autotools-package))
Here, we see that we have an autotools-based package, that also builds
the kernel module located in sub-directory driver/base
and, if libbar
is enabled, the kernel module located in sub-directory driver/bar
, and
defines the variable KVERSION
to be passed to the Linux buildsystem
when building the module(s).
The main macro for the kernel module infrastructure is kernel-module
.
Unlike other package infrastructures, it is not stand-alone, and requires
any of the other *-package
macros be called after it.
The kernel-module
macro defines post-build and post-target-install
hooks to build the kernel modules. If the package’s .mk
needs access
to the built kernel modules, it should do so in a post-build hook,
registered after the call to kernel-module
. Similarly, if the
package’s .mk
needs access to the kernel module after it has been
installed, it should do so in a post-install hook, registered after
the call to kernel-module
. Here’s an example:
$(eval $(kernel-module))

define FOO_DO_STUFF_WITH_KERNEL_MODULE
	# Do something with it...
endef
FOO_POST_BUILD_HOOKS += FOO_DO_STUFF_WITH_KERNEL_MODULE

$(eval $(generic-package))
Finally, unlike the other package infrastructures, there is no
host-kernel-module
variant to build a host kernel module.
The following additional variables can optionally be defined to further configure the build of the kernel module:
FOO_MODULE_SUBDIRS
may be set to one or more sub-directories (relative
to the package source top-directory) where the kernel module sources are.
If empty or not set, the sources for the kernel module(s) are considered
to be located at the top of the package source tree.
FOO_MODULE_MAKE_OPTS
may be set to contain extra variable definitions
to pass to the Linux buildsystem.
You may also reference (but you may not set!) those variables:
LINUX_DIR
contains the path to where the Linux kernel has been
extracted and built.
LINUX_VERSION
contains the version string as configured by the user.
LINUX_VERSION_PROBED
contains the real version string of the kernel,
retrieved by running make -C $(LINUX_DIR) kernelrelease
KERNEL_ARCH
contains the name of the current architecture, like arm
,
mips
…
The Buildroot manual, which you are currently reading, is entirely written using the AsciiDoc mark-up syntax. The manual is then rendered to many formats: html, split-html, pdf, epub and text.
Although Buildroot only contains one document written in AsciiDoc, there is, as for packages, an infrastructure for rendering documents using the AsciiDoc syntax.
Also as for packages, the AsciiDoc infrastructure is available from a br2-external tree. This allows documentation for a br2-external tree to match the Buildroot documentation, as it will be rendered to the same formats and use the same layout and theme.
Whereas package infrastructures are suffixed with -package
, the document
infrastructures are suffixed with -document
. So, the AsciiDoc infrastructure
is named asciidoc-document
.
Here is an example to render a simple AsciiDoc document.
01: ################################################################################
02: #
03: # foo-document
04: #
05: ################################################################################
06:
07: FOO_SOURCES = $(sort $(wildcard $(FOO_DOCDIR)/*))
08: $(eval $(call asciidoc-document))
On line 7, the Makefile declares what the sources of the document are. Currently, it is expected that the document’s sources are only local; Buildroot will not attempt to download anything to render a document. Thus, you must indicate where the sources are. Usually, the string above is sufficient for a document with no sub-directory structure.
On line 8, we call the asciidoc-document
function, which generates all
the Makefile code necessary to render the document.
The list of variables that can be set in a .mk
file to give metadata
information is (assuming the document name is foo
) :
FOO_SOURCES
, mandatory, defines the source files for the document.
FOO_RESOURCES
, optional, may contain a space-separated list of paths
to one or more directories containing so-called resources (like CSS or
images). By default, empty.
FOO_DEPENDENCIES
, optional, the list of packages (most probably,
host-packages) that must be built before building this document.
FOO_TOC_DEPTH, FOO_TOC_DEPTH_<FMT>, optional, the depth of the table of contents for this document, which can be overridden for the specified format <FMT> (see the list of rendered formats, above, but in uppercase, and with dash replaced by underscore; see example, below). By default: 1.
There are also additional hooks (see Section 18.23, “Hooks available in the various build steps” for general information on hooks), that a document may set to define extra actions to be done at various steps:
FOO_POST_RSYNC_HOOKS
to run additional commands after the sources
have been copied by Buildroot. This can for example be used to
generate part of the manual with information extracted from the
tree. As an example, Buildroot uses this hook to generate the tables
in the appendices.
FOO_CHECK_DEPENDENCIES_HOOKS
to run additional tests on required
components to generate the document. In AsciiDoc, it is possible to
call filters, that is, programs that will parse an AsciiDoc block and
render it appropriately (e.g. ditaa or
aafigure).
FOO_CHECK_DEPENDENCIES_<FMT>_HOOKS
, to run additional tests for
the specified format <FMT>
(see the list of rendered formats, above).
Buildroot sets the following variable that can be used in the definitions above:
$(FOO_DOCDIR)
, similar to $(FOO_PKGDIR)
, contains the path to the
directory containing foo.mk
. It can be used to refer to the document
sources, and can be used in the hooks, especially the post-rsync hook
if parts of the documentation need to be generated.
$(@D)
, as for traditional packages, contains the path to the directory
where the document will be copied and built.
Here is a complete example that uses all variables and all hooks:
01: ################################################################################
02: #
03: # foo-document
04: #
05: ################################################################################
06:
07: FOO_SOURCES = $(sort $(wildcard $(FOO_DOCDIR)/*))
08: FOO_RESOURCES = $(sort $(wildcard $(FOO_DOCDIR)/resources))
09:
10: FOO_TOC_DEPTH = 2
11: FOO_TOC_DEPTH_HTML = 1
12: FOO_TOC_DEPTH_SPLIT_HTML = 3
13:
14: define FOO_GEN_EXTRA_DOC
15:     /path/to/generate-script --outdir=$(@D)
16: endef
17: FOO_POST_RSYNC_HOOKS += FOO_GEN_EXTRA_DOC
18:
19: define FOO_CHECK_MY_PROG
20:     if ! which my-prog >/dev/null 2>&1; then \
21:         echo "You need my-prog to generate the foo document"; \
22:         exit 1; \
23:     fi
24: endef
25: FOO_CHECK_DEPENDENCIES_HOOKS += FOO_CHECK_MY_PROG
26:
27: define FOO_CHECK_MY_OTHER_PROG
28:     if ! which my-other-prog >/dev/null 2>&1; then \
29:         echo "You need my-other-prog to generate the foo document as PDF"; \
30:         exit 1; \
31:     fi
32: endef
33: FOO_CHECK_DEPENDENCIES_PDF_HOOKS += FOO_CHECK_MY_OTHER_PROG
34:
35: $(eval $(call asciidoc-document))
The Linux kernel package can use some specific infrastructures, based on package hooks, for building Linux kernel tools and/or building Linux kernel extensions.
Buildroot offers a helper infrastructure to build some userspace tools
for the target that are available within the Linux kernel sources. Since their
source code is part of the kernel source code, a special package,
linux-tools
, exists and re-uses the sources of the Linux kernel that
runs on the target.
Let’s look at an example of a Linux tool. For a new Linux tool named
foo
, create a new menu entry in the existing
package/linux-tools/Config.in
. This file will contain the option
descriptions related to each kernel tool that will be used and
displayed in the configuration tool. It would basically look like:
01: config BR2_PACKAGE_LINUX_TOOLS_FOO
02:         bool "foo"
03:         select BR2_PACKAGE_LINUX_TOOLS
04:         help
05:           This is a comment that explains what foo kernel tool is.
06:
07:           http://foosoftware.org/foo/
The name of the option starts with the prefix BR2_PACKAGE_LINUX_TOOLS_
,
followed by the uppercase name of the tool (like is done for packages).
Note. Unlike other packages, the linux-tools
package options appear in the
linux
kernel menu, under the Linux Kernel Tools
sub-menu, not under
the Target packages
main menu.
Then for each linux tool, add a new .mk.in
file named
package/linux-tools/linux-tool-foo.mk.in
. It would basically look like:
01: ################################################################################
02: #
03: # foo
04: #
05: ################################################################################
06:
07: LINUX_TOOLS += foo
08:
09: FOO_DEPENDENCIES = libbbb
10:
11: define FOO_BUILD_CMDS
12:         $(TARGET_MAKE_ENV) $(MAKE) -C $(LINUX_DIR)/tools foo
13: endef
14:
15: define FOO_INSTALL_STAGING_CMDS
16:         $(TARGET_MAKE_ENV) $(MAKE) -C $(LINUX_DIR)/tools \
17:                 DESTDIR=$(STAGING_DIR) \
18:                 foo_install
19: endef
20:
21: define FOO_INSTALL_TARGET_CMDS
22:         $(TARGET_MAKE_ENV) $(MAKE) -C $(LINUX_DIR)/tools \
23:                 DESTDIR=$(TARGET_DIR) \
24:                 foo_install
25: endef
On line 7, we register the Linux tool foo
to the list of available
Linux tools.
On line 9, we specify the list of dependencies this tool relies on. These
dependencies are added to the Linux package dependencies list only when the
foo
tool is selected.
The rest of the Makefile, lines 11-25, defines what should be done at the
different steps of the Linux tool build process, like for a
generic package
. They will actually be
used only when the foo
tool is selected. The only supported commands are
_BUILD_CMDS
, _INSTALL_STAGING_CMDS
and _INSTALL_TARGET_CMDS
.
Note. One must not call $(eval $(generic-package))
or any other
package infrastructure! Linux tools are not packages by themselves,
they are part of the linux-tools
package.
Some packages provide new features that require the Linux kernel tree
to be modified. This can be in the form of patches to be applied on
the kernel tree, or in the form of new files to be added to the
tree. The Buildroot’s Linux kernel extensions infrastructure provides
a simple solution to automatically do this, just after the kernel
sources are extracted and before the kernel patches are
applied. Examples of extensions packaged using this mechanism are the
real-time extensions Xenomai and RTAI, as well as the set of
out-of-tree LCD screen drivers fbtft.
Let’s look at an example of how to add a new Linux extension foo.
First, create the package foo
that provides the extension: this
package is a standard package; see the previous chapters on how to
create such a package. This package is in charge of downloading the
sources archive, checking the hash, defining the license information
and building user space tools, if any.
Then create the Linux extension proper: create a new menu entry in
the existing linux/Config.ext.in
. This file contains the option
descriptions related to each kernel extension that will be used and
displayed in the configuration tool. It would basically look like:
01: config BR2_LINUX_KERNEL_EXT_FOO
02:         bool "foo"
03:         help
04:           This is a comment that explains what foo kernel extension is.
05:
06:           http://foosoftware.org/foo/
Then for each linux extension, add a new .mk
file named
linux/linux-ext-foo.mk
. It should basically contain:
01: ################################################################################
02: #
03: # foo
04: #
05: ################################################################################
06:
07: LINUX_EXTENSIONS += foo
08:
09: define FOO_PREPARE_KERNEL
10:         $(FOO_DIR)/prepare-kernel-tree.sh --linux-dir=$(@D)
11: endef
On line 7, we add the Linux extension foo
to the list of available
Linux extensions.
On lines 9-11, we define what should be done by the extension to modify
the Linux kernel tree; this is specific to the linux extension and can
use the variables defined by the foo
package, like: $(FOO_DIR)
or
$(FOO_VERSION)
… as well as all the Linux variables, like:
$(LINUX_VERSION)
or $(LINUX_VERSION_PROBED)
, $(KERNEL_ARCH)
…
See the definition of those kernel variables.
The generic infrastructure (and as a result also the derived autotools
and cmake infrastructures) allows packages to specify hooks.
These define further actions to perform after existing steps.
Most hooks aren’t really useful for generic packages, since the .mk
file already has full control over the actions performed in each step
of the package construction.
The following hook points are available:
LIBFOO_PRE_DOWNLOAD_HOOKS
LIBFOO_POST_DOWNLOAD_HOOKS
LIBFOO_PRE_EXTRACT_HOOKS
LIBFOO_POST_EXTRACT_HOOKS
LIBFOO_PRE_RSYNC_HOOKS
LIBFOO_POST_RSYNC_HOOKS
LIBFOO_PRE_PATCH_HOOKS
LIBFOO_POST_PATCH_HOOKS
LIBFOO_PRE_CONFIGURE_HOOKS
LIBFOO_POST_CONFIGURE_HOOKS
LIBFOO_PRE_BUILD_HOOKS
LIBFOO_POST_BUILD_HOOKS
LIBFOO_PRE_INSTALL_HOOKS
(for host packages only)
LIBFOO_POST_INSTALL_HOOKS
(for host packages only)
LIBFOO_PRE_INSTALL_STAGING_HOOKS
(for target packages only)
LIBFOO_POST_INSTALL_STAGING_HOOKS
(for target packages only)
LIBFOO_PRE_INSTALL_TARGET_HOOKS
(for target packages only)
LIBFOO_POST_INSTALL_TARGET_HOOKS
(for target packages only)
LIBFOO_PRE_INSTALL_IMAGES_HOOKS
LIBFOO_POST_INSTALL_IMAGES_HOOKS
LIBFOO_PRE_LEGAL_INFO_HOOKS
LIBFOO_POST_LEGAL_INFO_HOOKS
LIBFOO_TARGET_FINALIZE_HOOKS
These variables are lists of variable names containing actions to be performed at this hook point. This allows several hooks to be registered at a given hook point. Here is an example:
define LIBFOO_POST_PATCH_FIXUP
        action1
        action2
endef

LIBFOO_POST_PATCH_HOOKS += LIBFOO_POST_PATCH_FIXUP
The POST_RSYNC
hook is run only for packages that use a local source,
either through the local
site method or the OVERRIDE_SRCDIR
mechanism. In this case, package sources are copied using rsync
from
the local location into the Buildroot build directory. The rsync
command does not copy all files from the source directory, though.
Files belonging to a version control system, like the directories
.git
, .hg
, etc. are not copied. For most packages this is
sufficient, but a given package can perform additional actions using
the POST_RSYNC
hook.
In principle, the hook can contain any command you want. One specific
use case, though, is the intentional copying of the version control
directory using rsync
. The rsync
command you use in the hook can, among
others, use the following variables:
$(SRCDIR)
: the path to the overridden source directory
$(@D)
: the path to the build directory
Many packages that support internationalization use the gettext library. Dependencies for this library are fairly complicated and therefore, deserve some explanation.
The glibc C library integrates a full-blown implementation of gettext, supporting translation. Native Language Support is therefore built-in in glibc.
On the other hand, the uClibc and musl C libraries only provide a
stub implementation of the gettext functionality, which makes it possible
to compile libraries and programs that use gettext functions, but without
providing the translation capabilities of a full-blown gettext
implementation. With such C libraries, if real Native Language Support
is necessary, it can be provided by the libintl
library of the
gettext
package.
Due to this, and in order to make sure that Native Language Support is properly handled, packages in Buildroot that can use NLS support should:
Ensure NLS support is enabled when BR2_SYSTEM_ENABLE_NLS=y. This
is done automatically for autotools packages and therefore should
only be done for packages using other package infrastructures.
Add $(TARGET_NLS_DEPENDENCIES)
to the package
<pkg>_DEPENDENCIES
variable. This addition should be done
unconditionally: the value of this variable is automatically
adjusted by the core infrastructure to contain the relevant list of
packages. If NLS support is disabled, this variable is empty. If
NLS support is enabled, this variable contains host-gettext
so
that tools needed to compile translation files are available on the
host. In addition, if uClibc or musl are used, this variable
also contains gettext
in order to get the full-blown gettext
implementation.
If needed, add $(TARGET_NLS_LIBS) to the linker flags, so that
the package gets linked with libintl. This is generally not
needed with autotools packages as they usually detect
automatically that they should link with libintl. However,
packages using other build systems, or problematic autotools-based
packages, may need this. $(TARGET_NLS_LIBS) should be added
unconditionally to the linker flags, as the core automatically
makes it empty or defines it to -lintl depending on the
configuration (see the sketch below).
No changes should be made to the Config.in
file to support NLS.
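As an illustration, for a package using the generic infrastructure, these two additions might look like the following sketch (libfoo is hypothetical, and passing the libraries through a LIBS variable assumes that the package's own Makefile honours it):

# Pull in host-gettext (and gettext for uClibc/musl) when NLS is enabled
LIBFOO_DEPENDENCIES += $(TARGET_NLS_DEPENDENCIES)

define LIBFOO_BUILD_CMDS
        # Link with libintl when the configuration requires it
        $(TARGET_MAKE_ENV) $(MAKE) -C $(@D) $(TARGET_CONFIGURE_OPTS) \
                LIBS="$(TARGET_NLS_LIBS)"
endef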
Finally, certain packages need some gettext utilities on the target,
such as the gettext
program itself, which allows retrieving translated strings from the
command line. In such a case, the package
should:
select BR2_PACKAGE_GETTEXT
in their Config.in
file,
indicating in a comment above the select that it’s a runtime dependency only.
not add any gettext dependency in the DEPENDENCIES variable of
their .mk file.
In Buildroot, there is some relationship between:
the package name, which is the package directory name (and the name of the *.mk file);
the config entry name that is declared in the Config.in file;
the makefile variable prefix.
It is mandatory to maintain consistency between these elements, using the following rules:
the package directory and the *.mk file name are the package name
itself (e.g.: package/foo-bar_boo/foo-bar_boo.mk);
the make target name is the package name itself (e.g.: foo-bar_boo);
the config entry is the upper case package name with . and -
characters substituted with _, prefixed with BR2_PACKAGE_ (e.g.:
BR2_PACKAGE_FOO_BAR_BOO);
the *.mk file variable prefix is the upper case package name
with . and - characters substituted with _ (e.g.:
FOO_BAR_BOO_VERSION).
Buildroot provides a script in utils/check-package
that checks new or
changed files for coding style. It is not a complete language validator,
but it catches many common mistakes. It is meant to be run on the actual
files you created or modified, before creating the patch for submission.
This script can be used for packages, filesystem makefiles, Config.in files, etc. It does not check the files defining the package infrastructures and some other files containing similar common code.
To use it, run the check-package
script, by telling it which files you created or changed:
$ ./utils/check-package package/new-package/*
If you have the utils
directory in your path you can also run:
$ cd package/new-package/
$ check-package *
The tool can also be used for packages in a br2-external:
$ check-package -b /path/to/br2-ext-tree/package/my-package/*
The check-package script requires you to install shellcheck and the
Python PyPI packages flake8 and python-magic. The Buildroot code
base is currently tested against version 0.7.1 of ShellCheck. If you
use a different version of ShellCheck, you may see additional,
unfixed, warnings.
If you have Docker or Podman you can run check-package
without
installing dependencies:
$ ./utils/docker-run ./utils/check-package
Once you have added your new package, it is important that you test it under various conditions: does it build for all architectures? Does it build with the different C libraries? Does it need threads, NPTL? And so on…
Buildroot runs autobuilders which
continuously test random configurations. However, these only build the
master
branch of the git tree, and your new fancy package is not yet
there.
Buildroot provides a script in utils/test-pkg
that uses the same base
configurations as used by the autobuilders so you can test your package
in the same conditions.
First, create a config snippet that contains all the necessary options
needed to enable your package, but without any architecture or toolchain
option. For example, let’s create a config snippet that just enables
libcurl
, without any TLS backend:
$ cat libcurl.config
BR2_PACKAGE_LIBCURL=y
If your package needs more configuration options, you can add them to the
config snippet. For example, here’s how you would test libcurl
with
openssl
as a TLS backend and the curl
program:
$ cat libcurl.config
BR2_PACKAGE_LIBCURL=y
BR2_PACKAGE_LIBCURL_CURL=y
BR2_PACKAGE_OPENSSL=y
Then run the test-pkg
script, by telling it what config snippet to use
and what package to test:
$ ./utils/test-pkg -c libcurl.config -p libcurl
By default, test-pkg
will build your package against a subset of the
toolchains used by the autobuilders, which has been selected by the
Buildroot developers as being the most useful and representative
subset. If you want to test all toolchains, pass the -a
option. Note
that in any case, internal toolchains are excluded as they take too
long to build.
The output lists all toolchains that are tested and the corresponding result (excerpt, results are fake):
$ ./utils/test-pkg -c libcurl.config -p libcurl
    armv5-ctng-linux-gnueabi [ 1/11]: OK
  armv7-ctng-linux-gnueabihf [ 2/11]: OK
            br-aarch64-glibc [ 3/11]: SKIPPED
               br-arcle-hs38 [ 4/11]: SKIPPED
                br-arm-basic [ 5/11]: FAILED
      br-arm-cortex-a9-glibc [ 6/11]: OK
       br-arm-cortex-a9-musl [ 7/11]: FAILED
       br-arm-cortex-m4-full [ 8/11]: OK
                 br-arm-full [ 9/11]: OK
        br-arm-full-nothread [10/11]: FAILED
          br-arm-full-static [11/11]: OK
11 builds, 2 skipped, 2 build failed, 1 legal-info failed
The results mean:
OK
: the build was successful.
SKIPPED
: one or more configuration options listed in the config
snippet were not present in the final configuration. This is due to
options having dependencies not satisfied by the toolchain, such as
for example a package that depends on BR2_USE_MMU
with a noMMU
toolchain. The missing options are reported in missing.config
in
the output build directory (~/br-test-pkg/TOOLCHAIN_NAME/
by
default).
FAILED: the build failed. Inspect the logfile file in the output
build directory to see what went wrong: the actual build failed,
the legal-info failed, or one of the preliminary steps (such as
applying the configuration, or running dirclean for the package) failed.
When there are failures, you can just re-run the script with the same
options (after you fixed your package); the script will attempt to
re-build the package specified with -p
for all toolchains, without
the need to re-build all the dependencies of that package.
The test-pkg
script accepts a few options, for which you can get some
help by running:
$ ./utils/test-pkg -h
Packages on GitHub often don’t have a download area with release tarballs. However, it is possible to download tarballs directly from the repository on GitHub. As GitHub is known to have changed download mechanisms in the past, the github helper function should be used as shown below.
# Use a tag or a full commit ID
FOO_VERSION = 1.0
FOO_SITE = $(call github,<user>,<package>,v$(FOO_VERSION))
Notes
The tarball name generated by the github helper matches the default one
from Buildroot (e.g.: foo-f6fb6654af62045239caed5950bc6c7971965e60.tar.gz),
so it is not necessary to specify it in the .mk file.
When the tag contains a prefix such as v in v1.0, then the
VERSION variable should contain just 1.0, and the v should be
added directly in the SITE variable, as illustrated above. This
ensures that the VERSION variable value can be used to match
against release-monitoring.org results.
If the package you wish to add does have a release section on GitHub, the maintainer may have uploaded a release tarball, or the release may just point to the automatically generated tarball from the git tag. If there is a release tarball uploaded by the maintainer, we prefer to use that since it may be slightly different (e.g. it contains a configure script so we don’t need to do AUTORECONF).
You can see on the release page whether a tarball was uploaded by the
maintainer or whether the release just points to the git tag. If the
maintainer uploaded a tarball, use that link directly to specify
FOO_SITE, and do not use the github helper. Otherwise, use the github
helper as described above.
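In the first case (a maintainer-uploaded tarball), the package definition might look like the following sketch (the user, package and version are placeholders, and the URL simply follows GitHub's usual releases/download layout; FOO_SOURCE would also need to be set if the uploaded file name does not match Buildroot's default foo-$(FOO_VERSION).tar.gz):

# Point directly at the maintainer-uploaded release tarball
FOO_VERSION = 1.0
FOO_SITE = https://github.com/<user>/<package>/releases/download/v$(FOO_VERSION)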
In a similar way to the github
macro described in
Section 18.25.4, “How to add a package from GitHub”, Buildroot also provides the gitlab
macro
to download from Gitlab repositories. It can be used to download
auto-generated tarballs produced by Gitlab, either for specific tags
or commits:
# Use a tag or a full commit ID
FOO_VERSION = 1.0
FOO_SITE = $(call gitlab,<user>,<package>,v$(FOO_VERSION))
By default, it will use a .tar.gz
tarball, but Gitlab also provides
.tar.bz2
tarballs, so by adding a <pkg>_SOURCE
variable, this
.tar.bz2
tarball can be used:
# Use a tag or a full commit ID
FOO_VERSION = 1.0
FOO_SITE = $(call gitlab,<user>,<package>,v$(FOO_VERSION))
FOO_SOURCE = foo-$(FOO_VERSION).tar.bz2
If there is a specific tarball uploaded by the upstream developers in
https://gitlab.com/<project>/releases/
, do not use this macro, but
rather use the link to the tarball directly.
As you can see, adding a software package to Buildroot is simply a matter of writing a Makefile using an existing example and modifying it according to the compilation process required by the package.
If you package software that might be useful for other people, don’t forget to send a patch to the Buildroot mailing list (see Section 22.5, “Submitting patches”)!