Bump to GCC 14.2 #5602
Conversation
Currently failing due to the old json-c. We need to bump json-c, but the new version builds with CMake instead of autoconf, so the recipe must be adapted.
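A minimal sketch of what the recipe adaptation could look like, assuming the usual alidist build variables ($SOURCEDIR, $INSTALLROOT, $JOBS); this is illustrative, not the final recipe:

```
# Hypothetical excerpt of the json-c recipe build script after the bump.
# The autotools steps (./configure --prefix=$INSTALLROOT && make install)
# would be replaced by a CMake configure/build/install sequence:
cmake "$SOURCEDIR"                          \
      -DCMAKE_INSTALL_PREFIX="$INSTALLROOT" \
      -DCMAKE_BUILD_TYPE=Release            \
      -DBUILD_SHARED_LIBS=ON
make -j"$JOBS" install
```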
Now fails in
I filed a bug report here: https://its.cern.ch/jira/browse/ALF-83. Also, as discussed with @ktf: binutils compilation fails randomly. We should probably downgrade to the binutils of gcc-toolchain-13.2-alice1, which was working.
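A downgrade would presumably just mean pointing the toolchain recipe back at the binutils release that 13.2-alice1 shipped; a hedged sketch of that kind of change (the version number is an assumed placeholder and should be copied from the old recipe):

```
# Hypothetical excerpt of the GCC-Toolchain build recipe: pin binutils back
# to the release used by gcc-toolchain-13.2-alice1. The version below is an
# assumption, not taken from the actual old recipe.
BINUTILS_VERSION="2.40"
curl -L "https://ftpmirror.gnu.org/binutils/binutils-${BINUTILS_VERSION}.tar.xz" | tar -xJ
```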
@singiamtel @ktf: the slc9-aarch CI fails with
@davidrohr so, for the alien.py errors in alidist-slc9-aarch64 I would need the log file to see what happened. What is weird is that tests 004 and 006, both cp related, worked. So, if the actual log file is not available to debug what happened (on x86_64 Alma9 it seems to work without problems), then just restart the test.
Well, I don't know how to get a log file beyond the build log I get from the CI.
So for xjalienfs/alien.py, these are the tests that are run: https://github.com/adriansev/jalien_py/tree/master/tests
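To reproduce outside the CI, one could presumably clone that repository and run the failing tests by hand; a rough sketch (the test script names are inferred from the numbering above and should be checked against the actual directory listing):

```
# Hypothetical local reproduction of the cp-related xjalienfs tests.
# Script names are assumptions based on the test numbers mentioned above.
git clone https://github.com/adriansev/jalien_py.git
cd jalien_py/tests
ls                # check the actual script names first
./test004*        # the cp-related tests that passed
./test006*
```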
@davidrohr AliRoot is now fine.
@davidrohr do you understand the issue with CUDA and the one with xmmintrin.h? They both seem legitimate, and I do not understand why we did not see them with GCC 13.
For CUDA it is clear: GCC 14 is not yet supported. We have to wait for a new CUDA release.
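For context, nvcc checks the host compiler version and aborts with an "unsupported GNU version" error when GCC is newer than the CUDA release supports. The check can be bypassed with a real nvcc flag, at one's own risk; waiting for an officially supported CUDA, as done here, is the safer route:

```
# nvcc validates the host compiler (via crt/host_config.h) and refuses GCC 14
# with the CUDA 12.6 used here. The flag below skips that check, but such
# builds are unsupported; this PR instead waits for a CUDA release with
# official GCC 14 support.
nvcc --allow-unsupported-compiler -ccbin g++-14 -o hello hello.cu
```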
@ktf: Now only the FullCI remains red; it will stay like this until we bump CUDA.
Changed to WIP to avoid retesting.
Changed back to WIP. @davidrohr the FullCI issue I assume is due to the use of the old container for slc8. The issue with ransBenchmark also looks real.
@ktf: So we have to wait for the EPNs to bump, then we can merge this.
This is ready to go now if it passes the CIs, except for the old FullCI, which can be removed now.
@ktf @singiamtel: The generators CI fails with
Is that known?
@ktf: FullCI9 and slc9 are green. Not sure about the Generators CI. The old FullCI we can ignore; as I wrote, you can disable them now. For merging, let's please wait until FLPs create their next FLPSuite. I'd aim for merging Friday evening.
I toggled FullCI as no longer required, and will delete it soon. Not sure where the AliGenerators error is coming from. Is GNU Gengetopt a dependency on our builder? And if so, how was it working before?

```
[root@dev ~]# docker run --rm -it registry.cern.ch/alisw/slc7-builder:latest bash
[root@0c7649ddaf33 /]# gengetopt
bash: gengetopt: command not found
```
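If gengetopt is genuinely needed, it would have to be present in the builder image or pulled in as a build dependency; a quick hedged check inside the container (assuming the package is named gengetopt in the enabled repositories, e.g. EPEL):

```
# Hypothetical check inside the slc7 builder container: install gengetopt and
# confirm it runs. Assumes the package exists in the configured repositories.
yum install -y gengetopt && gengetopt --version
```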
I would say we merge on Monday, then. There might be cleanups to be done on the CVMFS side which I do not want to do over the weekend. AliGenerators seems fine now.
@ktf: FLPSuite is tagged, so please go ahead and merge when you want.
The way I would do it is:
Given there is still Quark Matter stuff going on, I do not want to end up in front of the firing squad for changing the compiler without announcing it.
I couldn't test it yet because the build is broken after #5785.
I could generate the RPMs and validate them.
@singiamtel can we cache this PR and then merge it? Thanks.
@ktf @singiamtel: I would recommend we do this together with bumping CMake: #5792
Should I merge the CMake PR into this one, so that the hashes on the cache run match?
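Folding one PR branch into another so both the cache run and the CI see identical recipe hashes could look roughly like this (branch names are hypothetical):

```
# Hypothetical: merge the CMake bump PR (#5792) into this PR's branch so the
# cache run and the CI build the exact same recipe hashes. Branch names are
# assumptions, not the actual ones used.
git fetch origin pull/5792/head:cmake-bump
git checkout gcc-14-bump        # assumed name of this PR's branch
git merge cmake-bump
git push
```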
Fine with me.
Cache run ongoing at https://alijenkins.cern.ch/job/CacheO2Package/113/
I am not sure bumping CMake will actually trigger a rebuild, because it is only a build_requires. That said, fine with me as well.
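For context, alidist recipes distinguish runtime dependencies from build-only ones; a minimal illustrative sketch of that distinction (package list invented for illustration):

```
# Sketch of an alidist recipe header, for illustration only:
package: SomePackage
version: "1.0"
requires:
  - json-c           # runtime dependency
build_requires:
  - CMake            # build-time only; whether bumping it retriggers a
                     # rebuild of dependents is exactly the question above
```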
Commits added to this PR:
- Allow users to use a virtualenv as own python. Notice that it will still try to install our own provided packages so that we have a minimum working environment.
- Specifically clone repository in system override
As discussed, I merged both #5661 and #5792 into this PR, and rebased so we have the right commits for each. A new cache run is ongoing at https://alijenkins.cern.ch/job/CacheO2Package/114/
The cache build is done. These tests should be running against exactly the same code as before, so I think it should be fine to merge.
It will at least fail with the current CUDA 12.6, but I want to check for other failures.