
Cross-compilation

Cross-compiling means building a package for a different architecture or a different operating system than the one the build process is running on. It is a common way of obtaining packages for an architecture that conda-forge does not provide any runners for (the other available technique is emulation). Given how abundant x86_64 runners are, most common cross-compilation setups will target non-x86_64 architectures from x86_64 runners.

Terminology

Cross-compilation terminology usually distinguishes between two types of platform:

  • Build: The platform running the building process.
  • Host: The platform we are building packages for.
note

Some cross-compilation documentation might also distinguish a third type of platform, the target platform. This is used primarily when building cross-compilers, and indicates the platform for which the built package will generate code. For the purposes of this documentation, we'll consider this to be irrelevant and the target platform to be the same as the host.

Note that some resources may use the term "host" to refer to the build platform, and the term "target" to refer to the host platform. This convention is notably used by CMake, but it is not the one followed in this document.

How to enable cross-compilation

By default, the build scripts only enable building for platforms that feature native conda-forge runners. To enable cross-compilation, you need to extend the build_platform mapping in conda-forge.yml, which specifies which build platform to use when cross-compiling for a given target platform. For example, to build linux_aarch64 packages on a linux_64 build platform, you would set:

build_platform:
  linux_aarch64: linux_64

Then rerender the feedstock. This will generate the appropriate CI workflows and conda-build input metadata. See also test for how to skip the test phase when cross-compiling. Provided the requirements metadata and build scripts are written correctly, the package should just work. However, in some cases it will need adjustments; see the examples below for common cases.
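A feedstock can declare several cross-compiled targets at once. For instance, a conda-forge.yml along these lines (the exact set of targets depends on the feedstock) builds all of them from x86_64 runners:

build_platform:
  linux_aarch64: linux_64
  linux_ppc64le: linux_64
  osx_arm64: osx_64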

The platforms in use are exposed in recipes as selector variables and in the build scripts as environment variables. For v1 recipes, the following variables are used:

  • build_platform: The platform on which conda-build is running, corresponding to the build environment that is made available in $BUILD_PREFIX.
  • host_platform: The platform on which the package will be installed, corresponding to the host environment that is made available in $PREFIX. For native builds, matches build_platform.

In v0 recipes, target_platform is used in place of host_platform.

note

Many existing v1 recipes use target_platform instead of host_platform. This works because the target platform is almost always the same as the host platform, but it is technically incorrect.

In addition to these two variables, conda-forge's automation (e.g. conda-forge-ci-setup, the compiler activation packages, etc.) sets some more environment variables that can aid in cross-compilation setups; a usage sketch follows the list:

  • CONDA_BUILD_CROSS_COMPILATION: set to 1 when the build platform and the host platform differ.
  • CONDA_TOOLCHAIN_BUILD: the autoconf triplet expected for the build platform.
  • CONDA_TOOLCHAIN_HOST: the autoconf triplet expected for the host platform.
  • CMAKE_ARGS: arguments needed to cross-compile with CMake. Pass it to cmake in your build script.
  • MESON_ARGS: arguments needed to cross-compile with Meson. Pass it to meson in your build script. Note a cross build definition file is automatically created for you too.
  • CC_FOR_BUILD: a C compiler targeting the build platform.
  • CXX_FOR_BUILD: a C++ compiler targeting the build platform.
  • CROSSCOMPILING_EMULATOR: Path to the qemu binary for the host platform. Useful for running tests when cross-compiling.
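For illustration, a build-script fragment could use these variables as follows. This is only a sketch: the configure flags depend on the project, and some-helper-binary is a hypothetical freshly built executable.

# Tell an autoconf-based build which platform it runs on and which it targets
./configure \
    --build="${CONDA_TOOLCHAIN_BUILD}" \
    --host="${CONDA_TOOLCHAIN_HOST}" \
    --prefix="${PREFIX}"

# Run a freshly built binary directly on native builds,
# or through qemu when an emulator is available for the host platform
if [[ "${CONDA_BUILD_CROSS_COMPILATION:-}" != "1" ]]; then
  ./some-helper-binary --version
elif [[ -n "${CROSSCOMPILING_EMULATOR:-}" ]]; then
  "${CROSSCOMPILING_EMULATOR}" ./some-helper-binary --version
fi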

This is all supported by two main features introduced in conda-build 3: the split between build and host requirements, and the compiler metapackages (the compiler(...) Jinja functions used in the examples below).

Placing requirements in build or host

The rule of thumb is:

  • If it needs to run during the build, it goes in build.
  • If it needs to be available during the build and is specific to the host platform, but is not run, it goes in host (for example, headers and libraries).
  • If both conditions are true, it belongs in both.
note

Conda builds use the ${BUILD_PREFIX} / ${PREFIX} split even when not cross-compiling, so splitting the dependencies correctly is always necessary. However, non-cross-compilation builds are generally more tolerant of mistakes, such as running binaries from ${PREFIX} or building against libraries in ${BUILD_PREFIX}.
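As a minimal sketch of the split (the package names are purely illustrative), a v1 requirements section might look like this:

requirements:
  build:
    # executed on the build machine during the build
    - ${{ compiler("c") }}
    - cmake
    - ninja
  host:
    # linked against, but never executed, during the build
    - zlib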

In some cases, additional packages are needed only when cross-compiling. To cover that, use a selector that checks whether the build platform and the host platform differ. These are:

  • for v0 recipes, [build_platform != target_platform].
  • for v1 recipes, if: build_platform != host_platform.
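For instance, to pull a hypothetical native tool (here called foo-tools, purely for illustration) into build only when cross-compiling, the two syntaxes look like this:

# v0 (meta.yaml) syntax
requirements:
  build:
    - foo-tools  # [build_platform != target_platform]

# v1 (recipe.yaml) syntax
requirements:
  build:
    - if: build_platform != host_platform
      then: foo-tools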

However, there are some cases requiring special handling; most notably Python cross-compilation.

Cross-compilation examples

A recipe usually needs a few changes to be compatible with cross-compilation. Here are a few examples.

Autotools

A simple C library using autotools for cross-compilation might look like this:

requirements:
  build:
    - ${{ compiler("c") }}
    - ${{ stdlib("c") }}
    - make
    - pkg-config
    - gnuconfig
  host:
    - libogg

In the build script, you need to update the GNU config files and guard the test suite when cross-compiling:

# Get an updated config.sub and config.guess so new platforms are recognized
cp $BUILD_PREFIX/share/gnuconfig/config.* .

./configure --prefix=${PREFIX}
make -j${CPU_COUNT}

# Skip ``make check`` when cross-compiling without an emulator
if [[ "${CONDA_BUILD_CROSS_COMPILATION:-}" != "1" || "${CROSSCOMPILING_EMULATOR:-}" != "" ]]; then
  make check -j${CPU_COUNT}
fi

CMake

A simple C++ library using CMake for cross-compilation might look like this:

requirements:
  build:
    - ${{ compiler("cxx") }}
    - ${{ stdlib("c") }}
    - cmake
    - ninja
  host:
    - libboost-devel

In the build script, you need to pass CMAKE_ARGS to the cmake call and guard the tests when cross-compiling:

# Pass ``CMAKE_ARGS`` to ``cmake``
cmake ${CMAKE_ARGS} -G Ninja ..
cmake --build .

# Skip ``ctest`` when cross-compiling without an emulator
if [[ "${CONDA_BUILD_CROSS_COMPILATION:-}" != "1" || "${CROSSCOMPILING_EMULATOR:-}" != "" ]]; then
  ctest
fi

Meson

Similarly, with Meson, the recipe needs:

requirements:
  build:
    - ${{ compiler("c") }}
    - ${{ compiler("cxx") }}
    - ${{ stdlib("c") }}
    - meson
    - pkg-config
  host:
    - libogg

And this in build.sh:

# Pass ``MESON_ARGS`` to ``meson``
meson setup ${MESON_ARGS} ..
meson compile

Python

A simple Python extension using Cython and NumPy's C API would look like so:

requirements:
  build:
    - ${{ compiler("c") }}
    - ${{ stdlib("c") }}
    - if: build_platform != host_platform
      then:
        - cross-python_${{ host_platform }}
        - python
        - cython
        - numpy
  host:
    - python
    - pip
    - cython
    - numpy
  run:
    - python
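The build script itself usually needs no cross-compilation-specific changes; a typical sketch is just the standard pip invocation:

# cross-python takes care of pointing the interpreter at the host platform
${PYTHON} -m pip install . --no-deps --no-build-isolation -vv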

This example is discussed in greater detail in Details about cross-compiled Python packages below. For more details about NumPy, see Building against NumPy.

MPI

With MPI, openmpi is required for the build platform because its compiler wrappers are compiled binaries; mpich is not required, because its compiler wrappers are scripts (see example):

requirements:
  build:
    - if: build_platform != host_platform and mpi == "openmpi"
      then: ${{ mpi }}
  host:
    - ${{ mpi }}
  run:
    - ${{ mpi }}

In the build script, the openmpi compiler wrappers can be made to use the host libraries by setting the environment variable OPAL_PREFIX to $PREFIX:

if [[ "$CONDA_BUILD_CROSS_COMPILATION" == "1" && "${mpi}" == "openmpi" ]]; then
export OPAL_PREFIX="$PREFIX"
fi

Other examples

There are many more variations of this approach in the wild, so this section is not meant to be exhaustive, but merely to provide a starting point and some guidelines. Please look at other recipes for more examples.

Finding NumPy in cross-compiled Python packages using CMake

If you are building a Python extension with CMake and NumPy and want it to work when cross-compiling, add the following lines to your build script before the CMake invocation:

Python_INCLUDE_DIR="$(python -c 'import sysconfig; print(sysconfig.get_path("include"))')"
Python_NumPy_INCLUDE_DIR="$(python -c 'import numpy; print(numpy.get_include())')"
# usually either Python_* or Python3_* lines are sufficient
CMAKE_ARGS+=" -DPython_EXECUTABLE:PATH=${PYTHON}"
CMAKE_ARGS+=" -DPython_INCLUDE_DIR:PATH=${Python_INCLUDE_DIR}"
CMAKE_ARGS+=" -DPython_NumPy_INCLUDE_DIR=${Python_NumPy_INCLUDE_DIR}"
CMAKE_ARGS+=" -DPython3_EXECUTABLE:PATH=${PYTHON}"
CMAKE_ARGS+=" -DPython3_INCLUDE_DIR:PATH=${Python_INCLUDE_DIR}"
CMAKE_ARGS+=" -DPython3_NumPy_INCLUDE_DIR=${Python_NumPy_INCLUDE_DIR}"

Details about cross-compiled Python packages

Cross-compiling Python packages is a bit more involved than other packages. The main pain point is that we need an executable Python interpreter (i.e. python in build) that knows how to provide accurate information about the host platform. Since this is not officially supported, a series of workarounds is required to make it work.

In practical terms, it means that in conda-forge you need to:

  1. Add cross-python_${{ host_platform }} (or cross-python_{{ target_platform }} for v0 recipes) to the build requirements, guarded by the cross-compiling selector.
  2. Copy python itself and any non-pure Python packages (i.e. those that ship compiled extensions) that need to be present while the package is being built, such as cython and numpy, from host to the build requirements, also guarded by the cross-compiling selector.

This is demonstrated in the Python example.
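For a v0 recipe, the equivalent of the Python example above might look like this (a sketch mirroring that example, using meta.yaml selectors):

requirements:
  build:
    - {{ compiler("c") }}
    - {{ stdlib("c") }}
    - cross-python_{{ target_platform }}  # [build_platform != target_platform]
    - python                              # [build_platform != target_platform]
    - cython                              # [build_platform != target_platform]
    - numpy                               # [build_platform != target_platform]
  host:
    - python
    - pip
    - cython
    - numpy
  run:
    - python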

note

Since Python historically did not support cross-compilation, it always needs to be present in host requirements, even though it is technically run during the build process.