Fedora 31 Server, podman and SELinux
-
Finally I tried again. I removed all images and containers as well as the easyepg directory, and created a new directory at /home/user/easyepg.
At first I ran your SELinux command as the root user. After that I ran the script as my normal user and could run the images without any SELinux errors. That's nice
I found out there was an image missing: easyepg.cron
In the script file https://raw.githubusercontent.com/dlueth/easyepg.minimal/master/init they use the flag --restart unless-stopped:
.sh -c "docker create -l easyepg.minimal --name=easyepg.cron -e MODE=\"cron\" --restart unless-stopped ${OPTIONS} qoopido/easyepg.minimal:${TAG} 1> /dev/null"
This flag isn't supported by Podman.
I guess Podman won't start easyepg.cron after server restart?
Is there any solution? I downloaded the script with wget and made it executable. I removed the flag
--restart unless-stopped
and it worked.
As I said, I could now convert the script to Podman. Is there any way to get the SELinux label to persist after a reboot of the server? Thanks a lot for your help so far, @stacksofplates
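For reference, this is a sketch of the edited create call: the line from the init script with the unsupported flag dropped and docker swapped for podman (${OPTIONS} and ${TAG} are set elsewhere in that script):

```shell
# Sketch only: the create line from the init script without the
# unsupported --restart unless-stopped flag, using podman instead
# of docker. ${OPTIONS} and ${TAG} come from the surrounding script.
sh -c "podman create -l easyepg.minimal --name=easyepg.cron -e MODE=\"cron\" ${OPTIONS} qoopido/easyepg.minimal:${TAG} 1> /dev/null"
```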
-
@Woti said in Fedora 31 Server, podman and SELinux:
Finally I tried again. I removed all images and containers as well as the easyepg directory, and created a new directory at /home/user/easyepg.
At first I ran your SELinux command as the root user. After that I ran the script as my normal user and could run the images without any SELinux errors. That's nice
I found out there was an image missing: easyepg.cron
In the script file https://raw.githubusercontent.com/dlueth/easyepg.minimal/master/init they use the flag --restart unless-stopped:
.sh -c "docker create -l easyepg.minimal --name=easyepg.cron -e MODE=\"cron\" --restart unless-stopped ${OPTIONS} qoopido/easyepg.minimal:${TAG} 1> /dev/null"
This flag isn't supported by Podman.
I guess Podman won't start easyepg.cron after server restart?
Is there any solution? I downloaded the script with wget and made it executable. I removed the flag
--restart unless-stopped
and it worked.
As I said, I could now convert the script to Podman. Is there any way to get the SELinux label to persist after a reboot of the server? Thanks a lot for your help so far, @stacksofplates
No prob. That flag doesn't work because podman isn't a daemon. You can just create a systemd unit to start it and keep it running.
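A minimal unit along those lines might look like this (a sketch only, not the author's actual file; the container name easyepg.cron comes from the posts above, the file path and restart policy are assumptions):

```ini
# /etc/systemd/system/easyepg.service -- minimal sketch replacing
# the behavior of --restart unless-stopped for an existing container
[Unit]
Description=easyepg cron container
After=network.target

[Service]
Restart=always
# -a attaches to the container so systemd can track the process
ExecStart=/usr/bin/podman start -a easyepg.cron
ExecStop=/usr/bin/podman stop -t 10 easyepg.cron

[Install]
WantedBy=multi-user.target
```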
The SELinux label will still be there after a reboot. It's "temporary" but that only means it will change on a relabel of the filesystem or a
restorecon
command.
-
Semanage will permanently change the context. I'll get the exact command when I'm done driving.
-
Sorry it took so long. It's
semanage fcontext -a -t container_file_t <your-directory>
. To do it recursively, it's:
semanage fcontext -a -t container_file_t "<your-directory>(/.*)?"
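The "(/.*)?" suffix is an extended regex that matches the directory itself plus anything beneath it. A quick sanity check of that pattern with grep (no SELinux involved; the paths are hypothetical):

```shell
# The pattern semanage stores: the directory itself, or anything below it.
DIR="/home/user/easyepg"       # hypothetical directory
PATTERN="${DIR}(/.*)?"

for p in "$DIR" "$DIR/cache/file.xml" "/home/user/other"; do
    # -E: extended regex, -x: match the whole line, -q: quiet
    if printf '%s\n' "$p" | grep -Eqx "$PATTERN"; then
        echo "match:    $p"
    else
        echo "no match: $p"
    fi
done
# → match:    /home/user/easyepg
# → match:    /home/user/easyepg/cache/file.xml
# → no match: /home/user/other
```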
-
No stress sir Thanks for the command. I'll try it later.
-
@stacksofplates your semanage commands are working fine
-
Hello again
I have now created a systemd service for podman easyepg by following this tutorial:
https://www.redhat.com/sysadmin/podman-shareable-systemd-services
and it looks like it works.
Is there any way I can test whether the update of the EPG channel information works as expected by triggering it manually? The cron job runs at 2 a.m.
After a reboot the service is loaded but inactive, and I have to start it manually. How can I figure out what's going wrong during boot?
podman generate systemd --name easyepg.cron

# container-easyepg.cron.service
# autogenerated by Podman 1.8.0
# Mon Mar 16 22:40:13 CET 2020

[Unit]
Description=Podman container-easyepg.cron.service
Documentation=man:podman-generate-systemd(1)

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start easyepg.cron
ExecStop=/usr/bin/podman stop -t 10 easyepg.cron
PIDFile=/run/user/1000/containers/overlay-containers/a5482f12e8b718d6d080eb0a10283b456e58f57c2f1bd22c64e49f9e91073da8/userdata/conmon.pid
KillMode=none
Type=forking

[Install]
WantedBy=multi-user.target
systemctl --user status container-easyepg.service
● container-easyepg.service - Podman container-easyepg.cron.service
   Loaded: loaded (/home/twolf/.config/systemd/user/container-easyepg.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-03-17 21:30:35 CET; 1s ago
     Docs: man:podman-generate-systemd(1)
  Process: 1405 ExecStart=/usr/bin/podman start easyepg.cron (code=exited, status=0/SUCCESS)
 Main PID: 1429 (conmon)
    Tasks: 4 (limit: 2333)
   Memory: 23.0M
      CPU: 1.092s
   CGroup: /user.slice/user-1000.slice/user@1000.service/container-easyepg.service
           ├─1420 /usr/bin/fuse-overlayfs -o lowerdir=/home/twolf/.local/share/containers/storage/overlay/l/2YMPIRCLJIU>
           ├─1423 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 -c -e 3 -r 4 --netns-type=path /run/user/100>
           └─1429 /usr/bin/conmon --api-version 1 -s -c a5482f12e8b718d6d080eb0a10283b456e58f57c2f1bd22c64e49f9e91073da>

Mär 17 21:30:33 localhost.localdomain systemd[981]: Starting Podman container-easyepg.cron.service...
Mär 17 21:30:35 localhost.localdomain podman[1405]: 2020-03-17 21:30:35.237845063 +0100 CET m=+1.249145219 container in>
Mär 17 21:30:35 localhost.localdomain podman[1405]: 2020-03-17 21:30:35.287066083 +0100 CET m=+1.298366135 container st>
Mär 17 21:30:35 localhost.localdomain podman[1405]: easyepg.cron
Mär 17 21:30:35 localhost.localdomain systemd[981]: Started Podman container-easyepg.cron.service.
podman ps
CONTAINER ID  IMAGE                                     COMMAND  CREATED     STATUS             PORTS  NAMES
a5482f12e8b7  docker.io/qoopido/easyepg.minimal:latest           6 days ago  Up 12 minutes ago         easyepg.cron
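One thing worth checking for the "loaded but inactive after reboot" part: a --user unit only starts at boot if it is enabled in the user's systemd manager and the user session is allowed to linger without a login. A hedged sketch (the username twolf is taken from the status output above; the unit is assumed to live under ~/.config/systemd/user):

```shell
# Enable the generated unit in the user manager:
systemctl --user daemon-reload
systemctl --user enable container-easyepg.service

# Let this user's systemd instance start at boot, without a login:
sudo loginctl enable-linger twolf
```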
-
If it's really using cron, I don't know of a way to test it without just letting the job run.
As for the other issue, I've had that as well and I don't know a way around it. I'm still trying to figure that out. I run plex in a container and every time the host reboots I have to restart it.
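(One hedged option for the cron test: if the image schedules the update via an internal crontab, you may be able to inspect it and invoke the job by hand with podman exec. Whether crontab exists inside the container, and the exact job path, depend on the image.)

```shell
# Show the container's crontab, if the image provides one:
podman exec -it easyepg.cron sh -c "crontab -l"

# Then run the listed command manually inside the container, e.g.:
# podman exec -it easyepg.cron sh -c "<command from the crontab>"
```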
-
For now the server reboots once or twice a month due to updates, so it's no big problem to start the service manually. Maybe one day we'll figure out why it isn't starting automatically.
Anyway, thanks for your effort to get rid of the SELinux problem.
-
So I got a container to start with the system. I don't like what
podman generate systemd
gives you because it defeats the purpose of a container. Here's what I have:

[Unit]
Description=Plex
After=network.target

[Service]
TimeoutStartSec=5m
Restart=always
ExecStartPre=-/usr/bin/podman rm -f plex
ExecStart=podman run --name plex -v /mnt/media/movies:/movies -v /mnt/media/tv:/tv -v /mnt/media/music:/music -v /home/jhooks/plex/config:/config -p 32400:32400 -p 32400:32400/udp -p 32469:32469 -p 32469:32469/udp -p 5353:5353/udp -p 1900:1900/udp linuxserver/plex
ExecStop=-/usr/bin/podman kill plex
Type=simple
User=jhooks
RestartSec=30

[Install]
WantedBy=multi-user.target
I was running
ExecStart=podman run -d --rm --name plex blah blah
but even when I used forking it was failing to track the process. This unit kills the container and spins up a new one each time, which is what I wanted; that way I'm not dependent on container IDs existing.
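To activate a unit like that (assuming it was saved as /etc/systemd/system/plex.service, matching the names in the post above):

```shell
# Pick up the new unit file, then start it and enable it at boot:
sudo systemctl daemon-reload
sudo systemctl enable --now plex
```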
-
Heiho
I hadn't seen your message until now; a month has passed already.
Does your unit start the container automatically at boot? Are you using Plex? I am using Kodi.
-
@Woti said in Fedora 31 Server, podman and SELinux:
Heiho
I hadn't seen your message until now; a month has passed already.
Does your unit start the container automatically at boot? Are you using Plex? I am using Kodi.
Yeah, I got it to work!
-
Oh nice, sounds good! I'll try your solution and report back.
-
Hei, I wanted to try your solution. First, I wanted to run my container setup, but I get this error:
systemctl --user status container-easyepg.service
Failed to connect to bus: No such file or directory
I haven't changed anything since the last time and the container file exists...
I can start it in Cockpit but not in the console. Strange... I figured it out: I need to issue the above command as the user, not as root.
Is it wrong to issue this command as the user? I set up podman to run easyepg as a user, not as root.
Maybe that's why the container doesn't start during boot? Which way are you running podman, @stacksofplates: as a user or as root?
-
@Woti said in Fedora 31 Server, podman and SELinux:
Hei, I wanted to try your solution. First, I wanted to run my container setup, but I get this error:
systemctl --user status container-easyepg.service
Failed to connect to bus: No such file or directory
I haven't changed anything since the last time and the container file exists...
I can start it in Cockpit but not in the console. Strange... I figured it out: I need to issue the above command as the user, not as root.
Is it wrong to issue this command as the user? I set up podman to run easyepg as a user, not as root.
Maybe that's why the container doesn't start during boot? Which way are you running podman, @stacksofplates: as a user or as root?
I'm using user but not that way. I put the service in
/etc/systemd/system
and set a user in the unit file. So I still start it with
sudo systemctl restart plex
but systemd uses the user defined in the unit file to run the service.
-
@stacksofplates said in Fedora 31 Server, podman and SELinux:
@Woti said in Fedora 31 Server, podman and SELinux:
Hei, I wanted to try your solution. First, I wanted to run my container setup, but I get this error:
systemctl --user status container-easyepg.service
Failed to connect to bus: No such file or directory
I haven't changed anything since the last time and the container file exists...
I can start it in Cockpit but not in the console. Strange... I figured it out: I need to issue the above command as the user, not as root.
Is it wrong to issue this command as the user? I set up podman to run easyepg as a user, not as root.
Maybe that's why the container doesn't start during boot? Which way are you running podman, @stacksofplates: as a user or as root?
I'm using user but not that way. I put the service in
/etc/systemd/system
and set a user in the unit file. So I still start it with
sudo systemctl restart plex
but systemd uses the user defined in the unit file to run the service.
Okay. I have mine in /home/user/.config..., one of the hidden directories created by the podman generate command.
Maybe a stupid question, but what is the unit file?
-
@Woti said in Fedora 31 Server, podman and SELinux:
@stacksofplates said in Fedora 31 Server, podman and SELinux:
@Woti said in Fedora 31 Server, podman and SELinux:
Hei, I wanted to try your solution. First, I wanted to run my container setup, but I get this error:
systemctl --user status container-easyepg.service
Failed to connect to bus: No such file or directory
I haven't changed anything since the last time and the container file exists...
I can start it in Cockpit but not in the console. Strange... I figured it out: I need to issue the above command as the user, not as root.
Is it wrong to issue this command as the user? I set up podman to run easyepg as a user, not as root.
Maybe that's why the container doesn't start during boot? Which way are you running podman, @stacksofplates: as a user or as root?
I'm using user but not that way. I put the service in
/etc/systemd/system
and set a user in the unit file. So I still start it with
sudo systemctl restart plex
but systemd uses the user defined in the unit file to run the service.
Okay. I have mine in /home/user/.config..., one of the hidden directories created by the podman generate command.
Maybe a stupid question, but what is the unit file?
It's the .service file. They're called units because there are a handful of different types (service, timer, path, target, etc.).
-
Finally I found the solution here on github: https://github.com/containers/libpod/issues/5494
This time I used podman v1.8.0 to generate the easyepg.service file with podman generate systemd. There was a bug in this version that left default.target out of the generated file; it is fixed in later versions. Now it is working:
[Install]
WantedBy=multi-user.target default.target
-
@Woti said in Fedora 31 Server, podman and SELinux:
Finally I found the solution here on github: https://github.com/containers/libpod/issues/5494
This time I used podman v1.8.0 to generate the easyepg.service file with podman generate systemd. There was a bug in this version that left default.target out of the generated file; it is fixed in later versions. Now it is working:
[Install]
WantedBy=multi-user.target default.target
Ah ok. I hardly ever use generate because it kind of defeats the purpose of a container. It hard-codes the container's hash instead of its name for some reason.
-
I see. I haven't tried your solution yet, but I did read about that kind of solution on the Red Hat Access sites.
The thing with default.target is that if podman containers run as a user, they have no access to multi-user.target through systemd, if I understood that right. That's why you have to use default.target instead. I'll try your solution in a VM soon.