Add a cibuildwheel workflow #6188
Conversation
(force-pushed from 387dd0f to 09d7eb9)
@arvidn could you take a look at the linux and mac workflows? They have compile errors but I don't understand why.
Apparently I think the …
I'm not sure why this shows up in this PR though.
I think I needed to configure cibuildwheel for this, but didn't: https://cibuildwheel.readthedocs.io/en/stable/cpp_standards/#macos-and-deployment-target-versions. Is something similar happening with the Linux error? I'll try to configure a newer deployment target.
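The configuration hinted at above might look like this (a sketch; the `10.15` value and the use of `CIBW_ENVIRONMENT` are assumptions based on the linked cibuildwheel docs, not this PR's actual settings):

```sh
# Pin the macOS deployment target for the wheel builds (value is an example).
export CIBW_ENVIRONMENT='MACOSX_DEPLOYMENT_TARGET=10.15'
cibuildwheel --platform macos
```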
(force-pushed from ad39f21 to c7b8d23)
No, I have lots of stuff in my repo, and I would like to keep it. I take it this will pull in everything in the libtorrent directory then, whether it's needed or not.

Yes. On Linux it creates a docker container, copies the whole directory, and builds in isolation. I don't think cibuildwheel has options to limit what gets copied.
With Docker, normally you would use a `.dockerignore` file.
I believe `.dockerignore` wouldn't apply here, since cibuildwheel doesn't run `docker build`. The copy takes less than a second on a fresh checkout (see the PR check), and we don't expect …
(force-pushed from 8bbae72 to f1928a8)
I changed the implementation from the … Full workflow (all build flavors) here: …
```yaml
paths:
  - .github/workflows/cibuildwheel.yml
  - tools/cibuildwheel/**
  - pyproject.toml
```
this is assuming that anything that would break cibuildwheel would also break the existing builds of the python bindings, right?
Isn't this the only place where the binding is built in a manylinux environment though?
Changes to the referenced files could break `cibuildwheel` without breaking builds of the python bindings. `cibuildwheel` is the easiest (or only) way to use `manylinux`/`musllinux` environments, yes. Not sure if this answers what you're asking, but I expect `cibuildwheel` to be redundant with the other python build workflows, except when we want the specialized environments for building release wheels.
In theory we could remove the existing python bindings workflows and just use `cibuildwheel` with limited flavor selection instead for all PRs (see the sketch below), but I think there are a few disadvantages:
- The `cibuildwheel` workflow builds openssl from source on Linux, which takes longer
- The `cibuildwheel` workflow wouldn't test against distro versions of openssl, which can be nice
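For reference, limiting flavor selection can be done with cibuildwheel's build filters; a minimal sketch (the identifier below is an example choice, not what this PR uses):

```sh
# Build only one representative flavor instead of the full matrix.
CIBW_BUILD='cp39-manylinux_x86_64' cibuildwheel --platform linux
```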
I checked out your branch in a clean repo, but without …
```sh
}

function do_standard_install {
```
I don't see this function being used anywhere
These scripts are vendored from https://github.com/pypa/manylinux/tree/main/docker/build_scripts. I took them as a semi-official reference for how to build up-to-date openssl. (`manylinux`'s build scripts build openssl as a dependency for other build tools, then delete openssl, because they don't want to be on the hook to deploy updates for heartbleed 2.0.)
I thought it would be nice to add a cron workflow to automatically pull these scripts from their repo, but I wanted to keep this PR simple.
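Such a sync job could boil down to a small fetch script; a sketch, assuming the vendored files keep their upstream names (the file list here is illustrative):

```sh
#!/bin/sh
# Re-fetch the vendored build scripts from pypa/manylinux.
set -e
base=https://raw.githubusercontent.com/pypa/manylinux/main/docker/build_scripts
for f in build-openssl.sh build_utils.sh; do
    curl -fsSL "$base/$f" -o "tools/cibuildwheel/$f"
done
```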
Now it's just … I used the …
I get this build failure. I'll investigate:
…
interestingly, with this patch it builds: …
but cibuildwheel still fails with: …
(force-pushed from f1928a8 to 9945df0)
That … The workflows specifically select cpython currently, so it didn't get caught there. It might be easy to get pypy to build and work, but I didn't want to do it in this PR.
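The cpython-only selection could be expressed either way with cibuildwheel's filter options (a sketch, not necessarily the exact settings in this workflow):

```sh
# Either select CPython builds explicitly...
CIBW_BUILD='cp*' cibuildwheel --platform linux
# ...or skip PyPy builds.
CIBW_SKIP='pp*' cibuildwheel --platform linux
```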
ok, that seems to fix the build failure. I'm still seeing this though:
…
I found that you can see the build log with: …
Which reveals that the main libtorrent library is not built, just the python bindings. So it's really no surprise that running the tests fails saying the library isn't found. I suppose it's reasonable to just build the binding, but I don't understand why the … Am I misunderstanding something? Is it supposed to also build the main library?
I see, it's linking the python binding statically against both libtorrent and boost: …
clearly the python module that's actually loaded by the test wasn't linked statically.
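One way to double-check how the extension module was linked (a sketch; the wheel and module file names below are hypothetical):

```sh
# Unpack a built wheel and list the dynamic dependencies of its module.
unzip -o libtorrent-2.0.4-cp39-cp39-manylinux_2_17_x86_64.whl -d out
ldd out/libtorrent.cpython-39-x86_64-linux-gnu.so    # Linux
# otool -L out/libtorrent.cpython-39-darwin.so       # macOS equivalent
```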
is the virtual machine torn down and deleted every time? I'm interested in being able to look around where files end up and analyze the build output. Is there a way to do that?
@arvidn You can use the upload-artifact action to store output from a job and download it.
@cas-- as far as I understand, it's working in CI. I'm interested in being able to reproduce the build locally.
I can't think of why the linking wouldn't work. I suspect the test is picking up the wrong build artifact again, somehow. You could use … Unfortunately it looks like … You could start the docker container manually, and run each step manually in the container. I'm not sure why I haven't been able to reproduce this failure locally. It should be exactly the same.
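Running the container manually might look like this (a sketch; the image tag matches the manylinux2010 flavors in this PR but is an assumption about the workflow's exact image):

```sh
# Start the build container interactively with the checkout mounted,
# then run the workflow's build steps one at a time inside it.
docker run -it --rm -v "$PWD":/project -w /project \
    quay.io/pypa/manylinux2010_x86_64 bash
```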
🎉 |
FYI: I believe manually-triggered workflows must exist in the default branch (…).
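Once the workflow file is on the default branch, it can be triggered manually, e.g. with the GitHub CLI (the branch name here is an assumption):

```sh
gh workflow run cibuildwheel.yml --ref master
```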
@AllSeeingEyeTolledEweSew I'm finally trying to fully put this into production for the project I was telling you about, and my collaborators and I had a few questions I wanted to ask you. Could you please email me at [email protected]?
This adds a `cibuildwheel` workflow.

Features
- Will run automatically when a release is published; unconditionally uploads to pypi
- On PRs that touch the relevant files (`cibuildwheel.yml` or `tools/cibuildwheel/**`), we will test by building a representative sample of wheels (the full suite of wheels takes hours)

Changes
- The python project name is changed from `python-libtorrent` to just `libtorrent`. Closer to common practice in python, plus the name was available on pypi
- Other changes to `setup.py`, following the principle that `--config-mode=distutils` should create a wheel-suitable build by default

Tests
I patched this (and other dependent build system changes) on the `v1.2.14` and `v2.0.4` tags. I uploaded the wheels to test-pypi: …

Release procedures and notes
- Publishing to pypi requires a repo secret (`PYPI_API_TOKEN`)
- Publishing to test-pypi requires a repo secret (`TEST_PYPI_API_TOKEN`)
Proposed release procedure:
- Manually run the `cibuildwheel` workflow without publishing to pypi, to make sure everything builds
- Or just create the release, and the workflow will publish automatically (see the sketch below).
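The "just create the release" path can be driven from the GitHub CLI as well; a sketch with an example tag name:

```sh
# Publishing a release triggers the workflow, which uploads to pypi.
gh release create v2.0.5 --title v2.0.5 --notes "see ChangeLog"
```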
Release procedure caveats

NOTE: pypi uploads are immutable, and filenames cannot be reused even after files are deleted. Publish with care!!
But "libtorrent" at test-pypi is held by someone else, I believe by github user @mayliI have contacted @mayli by email to transfer ownership of the test-pypi projectEDIT: mayli transferred ownership of the project, thanks!Build Flavors
cp37-cp37m-macosx_10_9_x86_64
cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64
cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64
cp37-cp37m-musllinux_1_1_aarch64
cp37-cp37m-musllinux_1_1_x86_64
cp37-cp37m-win32
cp37-cp37m-win_amd64
cp38-cp38-macosx_10_9_x86_64
cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64
cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64
cp38-cp38-musllinux_1_1_aarch64
cp38-cp38-musllinux_1_1_x86_64
cp38-cp38-win32
cp38-cp38-win_amd64
cp39-cp39-macosx_10_9_x86_64
cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64
cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64
cp39-cp39-musllinux_1_1_aarch64
cp39-cp39-musllinux_1_1_x86_64
cp39-cp39-win32
cp39-cp39-win_amd64
cp310-cp310-macosx_10_9_x86_64
cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64
cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64
cp310-cp310-musllinux_1_1_aarch64
cp310-cp310-musllinux_1_1_x86_64
cp310-cp310-win32
cp310-cp310-win_amd64
Missing flavors
Flavors we'd like to build but can't
- macos arm64: `b2 architecture=arm address-model=64`, which should be right, but there are still link errors about mixing `x86_64` objects.
- `universal2`: `b2 architecture=combined address-model=64` does not build this (should pick `-arch x86_64 -arch arm64`), even in boost 1.77.0. I tried `cxxflags="-arch x86_64 -arch arm64"`, but the compiler says these flags are unused.
- `musllinux`: this is new, and `cibuildwheel` doesn't support it yet. EDIT: the latest cibuildwheel supports this!

Other flavors
The build times already take hours, the artifacts are large due to static linking, and pypi has finite hosting space. We should add flavors as requested, rather than building everything.
Python 3.6 will reach end of life in 2021-12, so I've preemptively removed it from the list to shorten build times. numpy has already done this in their most recent release.
Future optimizations
The builds currently take hours on `aarch64` and other emulated architectures. It would be nice to reduce this.

abi3
The biggest win would be to use `abi3`, which would let us produce one build per platform rather than building for every python version. However, `abi3` …
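For context, an `abi3` wheel build would be driven by setuptools' limited-API support; a sketch, assuming the extension could be compiled with `Py_LIMITED_API` (whether the binding supports that is the open question here):

```sh
# Build one wheel usable by CPython 3.7+ instead of one wheel per version.
python setup.py bdist_wheel --py-limited-api=cp37
```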
Prebuild dependencies
Using `actions/cache` is difficult because the builds run in docker. The right approach is probably to build a custom docker image with prebuilt dependencies. This would especially cut down build times for `aarch64` and other emulated architectures.
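A sketch of how such a custom image could be wired in (the image name and Dockerfile location are hypothetical; `CIBW_MANYLINUX_X86_64_IMAGE` is a real cibuildwheel option):

```sh
# Build an image with dependencies preinstalled, then point cibuildwheel at it.
docker build -t ghcr.io/example/libtorrent-manylinux:latest tools/cibuildwheel/
CIBW_MANYLINUX_X86_64_IMAGE=ghcr.io/example/libtorrent-manylinux:latest \
    cibuildwheel --platform linux
```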
macos ccache
We build artifacts for a given platform sequentially in a single job, which lets us use `ccache`. On the Linux builds this works well today.

On mac, `ccache` reports 0% hit rates between builds for some reason. The only thing that changes is the python version. We should investigate this.
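A possible starting point for that investigation (a sketch using standard ccache knobs):

```sh
export CCACHE_LOGFILE=/tmp/ccache.log   # record why each compile misses
ccache -z                               # zero the statistics
# ...run two consecutive wheel builds...
ccache -s                               # inspect hit/miss counters
```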
windows ccache equivalent

On Windows, there is no build caching. It would be nice to have. However the Windows builds already complete in ~40 minutes, so this is a smaller gain compared to others.