WIP: Dockerfile to build a buildroot-based cross-compile environment for vzlogger #486
base: master
Conversation
Force-pushed from 24d9026 to 6e70998.
abbreviated example run, to give readers an idea of what this does:
Force-pushed from d2b0fcf to 8e23794.
using the builder pattern and some hacks, i managed to optimize the image down to:
the individual layers are:
Force-pushed from 8e23794 to f471ca4.
now with libsml and also cross-compiles the tests
Force-pushed from f471ca4 to 6a6e2bd.
apt-get -y --force-yes upgrade ; \
apt-get -y --force-yes install \
# https://buildroot.org/downloads/manual/manual.html#requirement
make gcc g++ \
Is it better to use the gcc base image (https://hub.docker.com/_/gcc) and install fewer packages?
that's what my comment above says; i just had not done any research yet.
thanks for the suggestion of the 'gcc' image(s).
the gcc image itself does not make any sense to use,
because it's based on an image that already contains gcc 🤣,
which is then used to build gcc from source.
(and it does not even use a multi-stage build to remove the initial gcc.)
https://github.com/docker-library/gcc/blob/master/10/Dockerfile
but we might use the image that the gcc image is built on:
https://hub.docker.com/_/buildpack-deps/
https://github.com/docker-library/buildpack-deps/blob/master/debian/buster/Dockerfile
https://github.com/docker-library/buildpack-deps/blob/master/debian/buster/scm/Dockerfile
it has most of what we need. still not all, and also lots of stuff we don't need...
but unlikely to get a perfect match.
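a minimal sketch of that route, assuming buildpack-deps:buster as the base (the extra package list is a guess and would need verifying against the buildroot requirements linked above):

```dockerfile
# the image the gcc image itself is built FROM; its full variant already
# ships gcc, g++, make, patch, file, bzip2 and friends
FROM buildpack-deps:buster

# add only what buildroot still needs on top -- this list is an assumption,
# check it against https://buildroot.org/downloads/manual/manual.html#requirement
RUN set -xe ; \
    apt-get update ; \
    apt-get -y install --no-install-recommends bc cpio rsync unzip ; \
    rm -rf /var/lib/apt/lists/*
```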
is there a best practice for installing deps at build-time vs. using pre-built images?
there is an existing official docker build that uses buildroot to build uClibc,
similar to what i do here,
but it doesn't seem to be something we can re-use:
https://github.com/docker-library/busybox/blob/master/stable/uclibc/Dockerfile.builder
# so we can see their size in a separate layer
RUN set -xe ; \
apt-get update ; \
apt-get -y --force-yes upgrade ; \
The debian image is built often and released with the latest version of the packages. See https://hub.docker.com/_/debian?tab=tags&page=1&ordering=last_updated. Can we improve build speed by removing this step and ensuring that a pull of the debian image is done before the build?
is there a tag that simply always points to the latest pre-updated version?
(it's strange that 'latest' does not. i remember 'latest' being regularly updated at least for unstable.)
but also see above.
the only real issue is that if we need to install packages, that might pull in any amount of partially related updates.
You should be able to run docker build --pull. This should ensure that the build is using the latest available image under this tag.
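for example (the tag name is illustrative):

```sh
# --pull forces a fresh pull of the base image referenced in FROM,
# so the build starts from the latest published debian image
docker build --pull -t vzlogger-buildroot .
```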
i missed that --pull is needed to use the latest image release.
should save some time by avoiding updates.
is there a best practice for this?
i.e., can we rely on the latest image being up-to-date, or should we run the update in any case?
If it runs on the build server, it is always the latest version at build time. (Sorry for the delay, I was off for some days.)
# only for showing the type of the binary below
file \
; \
apt-get purge ; \
The purge is something that is not in the docker best practices (https://docs.docker.com/develop/develop-images/dockerfile_best-practices/). Does it reduce the image size?
this deletes the downloaded packages.
recent images are configured to do this automatically,
so it does no harm if done redundantly.
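for reference, the pattern the Docker best-practices guide recommends is to install and clean up in the same layer, so the package lists never end up in the image (package names are just examples):

```dockerfile
RUN apt-get update ; \
    apt-get -y install make gcc g++ ; \
    rm -rf /var/lib/apt/lists/*
```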
set -xe ; \
\
# download and unpack buildroot
wget --progress=dot:mega https://buildroot.org/downloads/buildroot-2021.02.1.tar.bz2 ; \
I think the buildroot version should be an ARG, so that testing another version is easy.
we could even pull buildroot from git.
i did not make this a parameter yet because i am unsure how much of the trickery below will break on updates anyway and need fixing.
i don't assume this would simply work.
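if we did parameterize it, a sketch could look like this (untested against the trickery below):

```dockerfile
# overridable at build time:
#   docker build --build-arg BUILDROOT_VERSION=2021.02.2 .
ARG BUILDROOT_VERSION=2021.02.1

RUN set -xe ; \
    wget --progress=dot:mega "https://buildroot.org/downloads/buildroot-${BUILDROOT_VERSION}.tar.bz2" ; \
    tar -xjf "buildroot-${BUILDROOT_VERSION}.tar.bz2"
```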
it turns out that buildroot can be globally configured to compile everything as static binaries using:
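(the config snippet did not survive the page scrape; the global switch in buildroot's .config is presumably the static-libraries option:)

```
BR2_STATIC_LIBS=y
```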
but this fails for both libsml and vzlogger for various reasons. also it turns out that linking vzlogger with uclibc is kinda futile, because libstdc++ links glibc anyway.
giving up on static linking for now.
(i have not actually tested running the resulting image.)
as static building is not working and we need the libraries later. there is redundant data in output/host/sysroot, but nontrivial to use...
… not the one we ran the test-build in
Force-pushed from 8387082 to 42c945d.
Do you want this one or #563? You should decide on one of them.
@maxberger: the alpine (or debian)-based images are "simple" traditional docker images built in the "usual way" from an OS image, and easy to handle for anybody, so i think we still want to offer this. THIS one, on the other hand, has a dockerfile that generates a cross-compile environment, which is WAY more complex and initially slower than the traditional approach, but which we can store for re-use (see my test-image on dockerhub). you might read this article on the approach:
my personal main target is to be able to efficiently build and run the unit-tests for multiple architectures, see the discussion above.
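a hypothetical invocation of such a stored builder image (image name and mount layout are made up for illustration):

```sh
# pull the prebuilt cross-compile environment once...
docker pull example/vzlogger-buildroot:latest
# ...then reuse it for every build, mounting the source tree into the container
docker run --rm -it -v "$PWD":/src -w /src example/vzlogger-buildroot:latest bash
```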
Ok, so this one is for running tests, whereas the alpine one would be for distribution. Makes sense, thanks for the explanation!
i'd rather say, this is for efficient cross-compilation,
I built the image on my machine. That took quite a while. So we would need to build a lot of arm images with one instance to save some compute time.
I also think that maintaining this is a task that takes some effort. Modern docker with multi-platform support is good: it may need some compute time to build, but the tooling is simple. In my opinion, to balance this, I would go with running the tests with docker qemu.
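a minimal sketch of the qemu-based multi-platform route being suggested (the tag name is illustrative):

```sh
# one-time setup: register qemu binfmt handlers so foreign-arch binaries can run
docker run --privileged --rm tonistiigi/binfmt --install all
# build (and, given a test stage, test) for several platforms in one invocation
docker buildx build --platform linux/arm/v6,linux/arm64,linux/amd64 -t vzlogger .
```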
echo '#!/bin/bash' ; \
echo 'f=/tmp/br_make_wrapper.$$' ; \
echo 'trap "rm -f $f" EXIT' ; \
echo 'echo "=== make $@ ===" >&2' ; \
echo 'make "$@" &> >(tee "$f" | grep --line-buffered ">>>") || {' ; \
echo ' e=$?' ; \
echo ' cat "$f" >&2' ; \
echo ' exit $e' ; \
echo '}' ; \
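for readability, the script those echo lines generate looks like this once written out:

```bash
#!/bin/bash
f=/tmp/br_make_wrapper.$$
trap "rm -f $f" EXIT
echo "=== make $@ ===" >&2
# run make, keep the full log in $f, but only pass the ">>>" buildroot
# progress lines through; on failure, dump the full log and propagate
# make's exit code
make "$@" &> >(tee "$f" | grep --line-buffered ">>>") || {
  e=$?
  cat "$f" >&2
  exit $e
}
```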
Would it be easier to maintain if this were a normal file, copied with ADD into the image?
i authored this particular script in 2020, and it has never needed any change since.
other than that, i inlined all the files in the dockerfile to avoid cluttering the repo with single-purpose files.
but i guess we should create a subdirectory for this anyway, and could then split them out.
the idea is that we can build this once, store it somewhere (like on dockerhub now) and use it almost indefinitely.
yes, but cross-compiling is complex in most cases.
i don't understand if you are arguing for or against here. even if the build of the buildroot image takes time, that only needs to be done once (see above); execution of the tests, once built, would have to happen in qemu either way.
previous discussion:
Okay, I will try to run some docker builds of this and of the alpine image in the GitHub agent in the next week, so we get some current durations. I am totally happy if we get a community-managed alpine-based Dockerfile in the master branch.
Not sure if I am missing the point, but why don't you just use pbuilder? You need a base system, which is created with:
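(the command block is missing from the scrape; the standard bootstrap command would be something like, distribution assumed:)

```sh
sudo pbuilder create --distribution buster
```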
Then you run debuild to create a dsc file.
This works; I did, however, not yet test the resulting package.
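taken together, the workflow sketched in this comment would be roughly (untested, flags and paths assumed):

```sh
debuild -S -us -uc                      # build an unsigned source package (.dsc)
sudo pbuilder build ../vzlogger_*.dsc   # build the binary package inside the chroot
```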
The above is OK for doing a build, but not for developing. Is that it?
@narc-Ontakac2: see the discussion in #478,
For running the tests pbuilder is oversized. It starts with a debian base system, installs all dependencies and then runs debuild (which will also run the tests). The main purpose is to check for forgotten build dependencies. It can run a qemu vm to run builds on foreign platforms. One of the things I want to do is to run these builds (amd64, armhf, arm64) as a release action and publish the packages to a repository.
as said in #478, for occasional building of releases, using qemu is probably ok.
I ran some builds of the alpine docker image. It took about 30 min for the majority of the runs to build a docker image for linux/arm/v6, linux/arm64 and linux/amd64, including tests for all platforms. One took 48 min. I did not find time to run a docker build for this image on GitHub Actions.
Building the buildroot image took 37 min on a github agent: https://github.com/StefanSchoof/vzlogger/actions/workflows/build_buildroot.yml
@StefanSchoof: i'll look into updating the buildx action to actually build the requested branch (it's hardcoded to build master atm), |
see #478/comment
TODO:
- add libsml ✔️
- the critical part is buildroot/output, which is ~300mb; it should be possible to get rid of everything else.