
root filesystem label is container_file_t when it should be root_t #149

Closed
cgwalters opened this issue Jan 22, 2024 · 9 comments · Fixed by #197

Comments

@cgwalters
Contributor

Migrating this back from CentOS/centos-bootc#184

Because we want to be able to add a proper Closes in this repository.


One thing I notice here...and I'm not yet certain if it's a bib regression or not, but looking at the disk image before it's booted:

$ guestfish --ro -a disk.qcow2                                                                                                                                                                                                                   
><fs> run
><fs> list-filesystems
/dev/sda1: unknown
/dev/sda2: vfat
/dev/sda3: ext4
/dev/sda4: ext4
><fs> mount /dev/sda4 /
><fs> getxattrs /
[0] = {
  attrname: security.selinux
  attrval: system_u:object_r:container_file_t:s0\x00
}
><fs> 

That's just really broken; we shouldn't end up with a physical disk image root labeled container_file_t! It looks like all of the labels up to the deployment root are similarly broken (they should be something like root_t or usr_t).

However once we get to the deployment things are fine:

><fs> getxattrs /ostree/deploy/default/deploy/3ef1290eacdb05e50127ed5a920e264f228dae248addb10d98224a2e04918c2c.0/etc/fstab
[0] = {
  attrname: security.selinux
  attrval: system_u:object_r:etc_t:s0\x00
}
><fs> getxattrs /ostree/deploy/default/deploy/3ef1290eacdb05e50127ed5a920e264f228dae248addb10d98224a2e04918c2c.0/etc/passwd 
[0] = {
  attrname: security.selinux
  attrval: system_u:object_r:passwd_file_t:s0\x00
}
><fs> 

And it's specifically that /ostree/deploy/default/backing is also container_file_t, and the overlayfs picks up that context and that breaks everything.
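As an aside for anyone reproducing this: the security.selinux values shown by getxattrs are NUL-terminated user:role:type:level strings, and only the type field matters here. A tiny standalone helper (hypothetical, not part of guestfish or osbuild) to pull it out:

```python
def selinux_type(xattr_value: bytes) -> str:
    """Extract the type field from a security.selinux xattr value.

    Values are NUL-terminated strings of the form user:role:type:level,
    e.g. b"system_u:object_r:root_t:s0\x00". The level itself may
    contain colons (MCS categories), so split at most three times.
    """
    context = xattr_value.rstrip(b"\x00").decode()
    user, role, type_, level = context.split(":", 3)
    return type_

# The broken physical root seen in the guestfish session above:
assert selinux_type(b"system_u:object_r:container_file_t:s0\x00") == "container_file_t"
# What it should be:
assert selinux_type(b"system_u:object_r:root_t:s0\x00") == "root_t"
```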

@cgwalters
Contributor Author

BTW I don't have much experience with osbuild debugging...if someone can add a brief crash course to the README.md here on e.g. "how can I set a breakpoint right after org.osbuild.selinux runs", that'd be awesome. I read through https://www.osbuild.org/guides/developer-guide/index.html but didn't find anything like that.

Edit: ok, osbuild --help shows a likely --break argument, but doing e.g. osbuild --break org.osbuild.selinux /tmp/manifest-qcow2.json gives me...

source/org.osbuild.skopeo-index (org.osbuild.skopeo-index): Getting image list signatures
source/org.osbuild.skopeo-index (org.osbuild.skopeo-index): Copying 0 images generated from 4 images in list
source/org.osbuild.skopeo-index (org.osbuild.skopeo-index): Writing manifest list to image destination
source/org.osbuild.skopeo-index (org.osbuild.skopeo-index): Storing list signatures
build:    	d01a64a05a404fffed27a6182ca253009fa48b5b1c79cd1729e46fb032165855
ostree-deployment:	a45beec52e20b9ca354df13344e59e7d13b75eeaa17940ee3d4abb39d2d607a0
image:    	c90a3366e8c5638900e07decfcd238d42a1c02b029ff685d9410b87cd3fdc109
qcow2:    	b8b41e5c3f1581628872f9541e66f319b607bdbe1d24f6859c5b57fc221dbb74

but...I don't know what that means; as I understand it I should be able to find filesystem trees from those hashes, but I don't see them under /.osbuild. (How could there be a hash for the qcow2 if I did --break?)

What I think is probably going on is that we need to do something a bit like this:
https://github.com/coreos/coreos-assembler/blob/90780f70cd4cd07ee84400c5e212c4060bbcd6c6/src/create_disk.sh#L295
Or I guess, since osbuild does that thing where it creates a temporary dir and then copies it all to the disk, we'd need to init the label for the root the same way. I don't quite understand how this works in the outside-of-container osbuild flow unless by accident?

@supakeen
Member

Something like rm -rf .osbuild and then an osbuild --break org.osbuild.selinux --export qcow2 --output-directory=/some/path manifest.json will drop you into /bin/bash for each org.osbuild.selinux stage encountered (there will likely be two, and you'd want the second one).

I'll do some docs stuff about this today, but the gist is that you need to have an export as well, and I think the pipeline must not already be in the cache. I'm contemplating changing osbuild so that when you pass --break it will ignore caches and always export everything.

@mvo5
Collaborator

mvo5 commented Jan 23, 2024

Thank you! I played around with this a bit and I think it's a bit random which labels we end up with on the disk before the first boot.
For example, when running the osbuild pipeline on a test Debian machine without SELinux and then running guestfish, I see no labels at all.

The pipeline we generate in osbuild runs org.osbuild.ostree.container.deploy to install the files from the container into the deployment root. This stage is just a thin wrapper around ostree container image deploy (it took me a bit to find this in ostree-rs-ext). Then it runs org.osbuild.ostree.selinux, which appears to only label /boot (on /dev/sda2), /etc, and /var (the last two are not on disk, though). Pardon my ignorance here, but assuming my understanding above is correct: what component should do the labeling of the sysroot/deployment root when we build the image? (Leaving it to first boot as we did before now seems to trigger boot failures, as recently observed in CentOS/centos-bootc#184.)
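For illustration, a heavily simplified sketch of that part of the generated pipeline (stage names taken from the description above; options are elided, so a real manifest looks considerably different):

```json
{
  "name": "ostree-deployment",
  "stages": [
    { "type": "org.osbuild.ostree.container.deploy", "options": {} },
    { "type": "org.osbuild.ostree.selinux", "options": {} }
  ]
}
```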

Following the code, I see that there is a codepath from deploy.rs:deploy() that calls sysroot.deploy_tree_with_options, to sysroot_finalize_deployment(), which AIUI should do some labeling of the deployment root. So I'm clearly missing something.

Will moving to bootc install to-filesystem fix this issue? It seems both use the underlying ostree-rs-ext/lib/src/container/deploy.rs:deploy() helper, so maybe not.

@rhatdan
Contributor

rhatdan commented Jan 23, 2024

The labeling should be from /.

@cgwalters
Contributor Author

CentOS/centos-bootc#186 will undo the change that triggered this, but we still definitely need to fix the disk image labels.

mvo5 added a commit to mvo5/bootc-image-builder that referenced this issue Jan 23, 2024
Because of the issues with the latest
CentOS/centos-bootc#184 and
osbuild#149

with the latest quay.io/centos-bootc/fedora-bootc:eln this commit
moves to the last known good container id.
@cgwalters
Contributor Author

When running the osbuild pipeline on a test Debian machine without SELinux and then running guestfish, I see no labels at all, for example.

We should just hard error out when trying to write an SELinux-enabled target system from a non-SELinux host. It's possible, but horrendously complex; today bootc install has explicit logic to disallow this...which we'd be using, if we were using bootc install...

@cgwalters
Contributor Author

The pipeline we generate in osbuild runs org.osbuild.ostree.container.deploy to install the files from the container into the deployment root.

The problem is around the filesystem root. ostree is just a super fancy wrapper around chroot and hardlinks. We need to ensure that what ostree calls the "physical /" aka the thing you see when you just mount the filesystem externally is labeled as root_t. The coreos-assembler code linked above has some explicit logic to do that. bootc...hmm, doesn't and it seems to work, which I think is actually a consequence of the fact that bootc always writes to one of:

  • A newly initialized root filesystem, where default labeling rules in the kernel take effect
  • An externally provisioned filesystem, where we assume this was done correctly

osbuild however makes a temporary root and then copies it, and I believe that's the source of the problem.

(I still don't understand why we don't default to writing directly to the target root filesystem and only offer "construct temporary tree" as a debugging aid, because doing it directly is WAY more efficient)
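To make the "default labeling rules" point concrete, here is a toy model (pure illustration, not real libselinux code) of file-context matching: files created fresh on a filesystem get a label computed from rules like these, whereas a tree that is copied wholesale just keeps whatever label the staging directory had. The patterns below are made up for the example; real file_contexts entries are much more numerous.

```python
import re

# Toy file-context table, loosely modeled on SELinux file_contexts:
# (regex, type). As with file_contexts, the last matching rule wins,
# so rules are ordered general to specific.
FILE_CONTEXTS = [
    (r"/.*", "default_t"),
    (r"/", "root_t"),
    (r"/etc(/.*)?", "etc_t"),
    (r"/usr(/.*)?", "usr_t"),
]

def default_label(path: str) -> str:
    """Return the type of the last rule that fully matches path."""
    label = None
    for pattern, type_ in FILE_CONTEXTS:
        if re.fullmatch(pattern, path):
            label = type_
    return label

# A freshly initialized root picks up root_t from the rules...
assert default_label("/") == "root_t"
assert default_label("/etc/fstab") == "etc_t"

# ...but a tree copied out of the osbuild store keeps the label of
# the staging directory it came from, since a copy preserves xattrs:
staged_label = "container_file_t"  # what the store has inside the container
copied_label = staged_label        # plain copy carries the label verbatim
assert copied_label != default_label("/")
```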

cgwalters added a commit to cgwalters/bootc that referenced this issue Jan 29, 2024
This came out of a discussion with bootc-image-builder, which
has this issue right now:
osbuild/bootc-image-builder#149

As I noted in that issue, I think it's basically been working
here because we always write to a real fresh filesystem, but
let's be very explicit.

There's a notable tricky bootstrapping we're solving here
around "what's the label of `/`" because we know we are running
the target OS as a container image already.

Signed-off-by: Colin Walters <[email protected]>
@cgwalters
Contributor Author

So I did containers/bootc#290 but this won't help bib until it uses bootc install to-filesystem.

This...might help when run in the osbuild pipeline, or it might not. I won't be sure until we can look at the details of how bib calls bootc here, and how some of the files are laid out.

I think the most relevant thing going on here is that the VOLUME /store gets us a temporary xfs filesystem tree on the host, but labeled container_file_t (because normal SELinux rules would block anything else). But we should be able to happily make that root_t (and anything else we need). I do still think it'd make sense to write to the real target filesystem by default, but if the above plus using bootc install to-filesystem doesn't fix it, then we can probably just hack this with a chcon -t root_t /store.
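If the chcon route is taken, a sketch might look like this (hypothetical, not actual bib code; the run() wrapper defaults to dry-run so the commands can be inspected on a non-SELinux host):

```shell
#!/bin/sh
# Sketch of the proposed hack: relabel the osbuild store before the
# synthesized tree gets copied into the disk image.
set -eu

STORE=${STORE:-/store}   # the VOLUME used by bootc-image-builder
DRYRUN=${DRYRUN:-1}      # default to printing commands only

run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# Give the store the label the physical root should end up with,
# rather than the container_file_t it gets from container policy.
run chcon -t root_t "$STORE"

# Verify: ls -Z prints the security context.
run ls -Zd "$STORE"
```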

cgwalters added a commit to cgwalters/bootc-image-builder that referenced this issue Feb 13, 2024
The way osbuild works is to synthesize a filesystem tree in the
store, then copy it to the disk.  This ensures the label for the
store is `root_t` which ends up being the labeling for
the "infrastructure" bits in the `/ostree` repository in the
target root.

This in turn is blocking a lot of things.

Closes: osbuild#149
@cgwalters
Contributor Author

PR in #197
