roll-out of alma8-based infrastructure (aka centos 8) #1941

Open · 7 of 12 tasks
beckermr opened this issue Apr 19, 2023 · 51 comments

@beckermr (Member) commented Apr 19, 2023:

This issue tracks the CentOS 8 implementation PRs.

The Alma 8 repo is https://repo.almalinux.org/almalinux/8.7/BaseOS/x86_64/os/Packages/ etc.

@jaimergp (Member) commented:

What I understood was:

  • We will build the new sysroots from Alma 8
  • Alma 8 happens to use glibc 2.28
  • manylinux_2_28 is also derived from Alma 8, so that's the match?
  • Anaconda versions their sysroots with the glibc version, and we do too?
  • The cosN (CentOS version N) prefixes we had in some places will be replaced by conda_X_YZ (with X.YZ being the glibc version)

And maybe I am missing something else, but we can probably merge the discussion with the CentOS 8 thread mentioned in my comment above.

@beckermr (Member Author) commented:

Someone, I think @isuruf, mentioned listing versions for other things. libm maybe?

@h-vetinari (Member) commented Apr 22, 2023:

> The cosN (CentOS version N) prefixes we had in some places will be replaced by conda_X_YZ (with X.YZ being the glibc version)

Manylinux uses glibc major.minor for its versioning, and I think it's good to match that (I think in the core call the mood was that conda_2_28 would be the best option).

That said, not least after the ill-fated Debian-based manylinux_2_24, in effect the glibc version has just been a way to encode the RHEL major version:

    manylinux        glibc   RHEL
    manylinux1       2.5     5
    manylinux2010    2.12    6
    manylinux2014    2.17    7
    manylinux_2_28   2.28    8

That's because manylinux cannot bring its own updated compiler stack along and so is dependent on having the devtoolset backports. Perhaps keeping RHEL X as a base (through one of its ABI-compatible derivatives like Alma, Rocky, UBI) solves the other versioning questions, even if we call it conda_2_28?

@beckermr (Member Author) commented:

Yes. We plan to keep alma8 as the base.

@beckermr changed the title from "formal definition of conda_2_28" to "implementation of conda_2_28 (aka centos 8)" on Apr 23, 2023
@beckermr pinned this issue on Apr 23, 2023
@isuruf (Member) commented Apr 23, 2023:

One plus about just using conda instead of conda_2_28 is that the user does not have to deal with cdt_name in the recipes. They can just use the versioning of the sysroot, and cdt_name will become obsolete.

@beckermr (Member Author) commented:

We'll need to tuck the cdt name into the CDTs somewhere, maybe? I don't know whether the CDT package names will conflict or not.

@beckermr (Member Author) commented:

Can you send an example recipe where people are dealing directly with cdt_name? I thought the jinja2 function took care of that.
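
(For reference, recipes normally only touch CDTs through the cdt() jinja helper rather than through cdt_name directly; a minimal illustrative sketch of a meta.yaml build section, with example package names:)

    requirements:
      build:
        - {{ compiler('c') }}
        # CDT packages are resolved for the configured cdt_name behind the scenes
        - {{ cdt('mesa-libgl-devel') }}  # [linux]
        - {{ cdt('libselinux-devel') }}  # [linux]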

@isuruf (Member) commented Apr 23, 2023:

On the other hand, that line in conda-forge.yml serves two purposes: setting cdt_name and the docker image name.
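
(The line in question is the os_version entry in conda-forge.yml; a minimal sketch of the then-current setup, assuming the cos7 value:)

    # conda-forge.yml
    os_version:
      linux_64: cos7
      linux_aarch64: cos7
      linux_ppc64le: cos7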

@beckermr (Member Author) commented:

Hmmmm. Being able to at least match CDTs to the OS of our docker containers is possibly useful in itself? This would argue for using conda_2_28 in both places.

@isuruf (Member) commented Apr 23, 2023:

> Being able to at least match CDTs to the OS of our docker containers is possibly useful in itself?

That's not really needed. We use cos6 CDTs in cos7 docker images.

@beckermr (Member Author) commented:

I'm not saying we always have to have them matched. I'm saying that using the same notation in both places is helpful.

@jakirkham (Member) commented:

Would it help to start a PR for Docker images? Or are we not ready for that yet?

@beckermr (Member Author) commented:

Go for it, but I don't expect it to be merged anytime soon.

@jakirkham (Member) commented:

Gotcha, what do we see as the required steps before they are merged? Asking since they wouldn't be integrated anywhere by just publishing the image. Or is there something else I'm missing?

@beckermr (Member Author) commented:

I am not sure what goes in them. If we need the sysroots to put in them, then we'd need those first. If they don't have anything special, then we can just build them.

@jakirkham (Member) commented:

Gotcha.

Don't think the sysroot is needed.

The images cache the compiler packages as a convenience, but that can be disabled temporarily, or they can use older compilers for now. Not too worried about this.

Can't think of anything else of concern.

If something comes up when we start working on them, we can always discuss.

@jakirkham (Member) commented:

Started adding an image in PR ( conda-forge/docker-images#235 )

@jakirkham (Member) commented May 31, 2023:

From Matt: we need to update os_version for conda_2_28. There is also a corresponding way to do this in staged-recipes.

xref: conda-forge/conda-smithy#1434
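
(If the new image follows the same pattern as cos7, the opt-in might eventually look like the snippet below; the alma8 value name is an assumption here, pending conda-forge/conda-smithy#1434:)

    # conda-forge.yml (hypothetical)
    os_version:
      linux_64: alma8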

@h-vetinari (Member) commented:

> adjust smithy to allow for easy alma8 config

What's the intention here? Being able to switch the image? (I'm asking because I'd be willing to give it a shot...)

FWIW, the current 2.17 setup doesn't need any smithy interaction; it's enough to just add sysroot_linux-64 2.17 # [linux64] to the build dependencies. AFAIR that's because we switched the images to cos7 while still using the cos6 sysroot by default. This setup was the result of a bunch of discussions around resolver errors and other issues with older images when packages are built against the newer sysroot (for details, see this summary of the very clever setup proposed by Isuru).

Is there a reason we couldn't switch the images to alma8, but keep the sysroot at cos6 (with opt-in upgrade to cos7 & alma8)?
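
(A sketch of the opt-in sysroot pin described above, as it appears in a recipe's meta.yaml; the 2.28 line is hypothetical and only illustrates what an alma8-based opt-in could look like:)

    requirements:
      build:
        - {{ compiler('c') }}
        # opt in to the newer (cos7 / glibc 2.17) sysroot
        - sysroot_linux-64 2.17  # [linux64]
        # a hypothetical alma8-based opt-in might instead pin
        # - sysroot_linux-64 2.28  # [linux64]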

@beckermr (Member Author) commented:

There is no reason we couldn't bump images. This may break builds using yum requirements if the packages have changed names or conventions upstream.

@h-vetinari (Member) commented:

> This may break builds using yum requirements if the packages have changed names or conventions upstream.

According to this search, there are around 170 recipes that use yum_requirements.txt (for the most part it's mesa, x11, etc.). I guess it would be possible to audit those for any name changes; however, given that Alma 8 intends to be bug-for-bug compatible with RHEL 8, I strongly doubt that packages would change names, TBH.
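
(For reference, yum_requirements.txt is just a plain list of RPM package names that get installed via yum into the build image before the build; a hypothetical example:)

    mesa-libGL
    libXext
    libXrender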

@beckermr (Member Author) commented:

Fair point. I'm happy with simply bumping the default image.

@h-vetinari (Member) commented Jun 16, 2023:

There seems to be something going awry with the new repodata hack. I'm getting Alma 8 kernel headers together with the COS 7 sysroot on aarch:

The following NEW packages will be INSTALLED [selection]:

    _sysroot_linux-aarch64_curr_repodata_hack: 4-h57d6b7b_13             conda-forge
    kernel-headers_linux-aarch64:              4.18.0-h5b4a56d_13        conda-forge  # <- kernel version in RHEL 8 !!
    sysroot_linux-aarch64:                     2.17-h5b4a56d_13          conda-forge  # <- glibc version in RHEL 7

Interestingly, this is not happening on PPC, where I get:

The following NEW packages will be INSTALLED [selection]:

    _sysroot_linux-ppc64le_curr_repodata_hack: 4-h43410cf_13             conda-forge
    kernel-headers_linux-ppc64le:              3.10.0-h23d7e6c_13        conda-forge  # <- kernel version in RHEL 7
    sysroot_linux-ppc64le:                     2.17-h23d7e6c_13          conda-forge  # <- glibc version in RHEL 7

@beckermr (Member Author) commented:

What makes you think it is the repodata hack?

@h-vetinari (Member) commented:

> What makes you think it is the repodata hack?

It must be related to the sysroot, where the kernel-headers are built, and I couldn't see a difference between aarch/ppc in conda-forge/linux-sysroot-feedstock#46. Both variants are pulling in crdh 4, but looking a bit closer, the divergence between aarch & ppc goes back much further. It seems we've been using newer kernel headers on aarch since conda-forge/linux-sysroot-feedstock#15 (corresponding to the kernel version in RHEL 8, but apparently still being downloaded through the CentOS 7 repos [1]). Seems it's not critical.

Footnotes

  1. though that PR doesn't document the rationale, so I can't really say why it diverged at all

@jakirkham (Member) commented:

This is worth a read

https://almalinux.org/blog/impact-of-rhel-changes/

@jakirkham (Member) commented:

IIRC in our last conda-forge meeting it sounded like we are ok going ahead with AlmaLinux 8. Did I understand correctly?

If so, it sounds like it comes down to doing these next steps (particularly upgrading CDTs). Does that sound right?

@beckermr (Member Author) commented:

Yes, but see the list above. We are trying to get rid of as many CDTs as we can.

@h-vetinari (Member) commented:

We have merged conda-forge/docker-images#242 recently. What are some next steps here? Are we ready to tackle the first CDTs (modulo conda-forge/cdt-builds#66)?

@beckermr (Member Author) commented:

We agreed to get rid of as many CDTs as we could, so that's the next step: go through them and figure out what we can build ourselves.

@jakirkham (Member) commented:

Maybe an earlier step is just getting a list of the CDTs we use so we can search conda-forge for fuzzy matches.

@jaimergp (Member) commented:

xref conda-forge/cdt-builds#66

@h-vetinari changed the title from "implementation of conda_2_28 (aka centos 8)" to "roll-out of alma8-based infrastructure (aka centos 8)" on Oct 25, 2024