Pass Intel iGPU to an Unprivileged LXC Container (Proxmox)
(Last updated: March 28, 2024)
Most of us hobbyist home lab users just have a mini PC without a video card, but with a CPU that has an iGPU inside. If we want to run an application that needs hardware acceleration (like Nextcloud Memories or Frigate), there are three ways to go:
- Use a VM and pass the iGPU to it. The problem: you will not be able to see the host console when connecting a monitor, and no other container or VM will be able to use the iGPU.
- Use a privileged LXC container that has access to the host drivers as well as the render and video groups. Not great from a security point of view.
- Use an unprivileged LXC container, mount the device, and map the users and groups.
Of course, we will do the third one.
First, check that you can see your devices. On the host machine run:
ls -l /dev/dri
You should see an output like this one. Here the device’s major number is 226 and the minors are 0 and 128:
drwxr-xr-x 2 root root 80 Mar 21 08:09 by-path
crw-rw---- 1 root video 226, 0 Mar 21 08:09 card0
crw-rw---- 1 root render 226, 128 Mar 21 08:09 renderD128
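If your numbers are different, you can read them straight from the device node; stat prints the major and minor in hexadecimal (226 is e2, 128 is 80). A quick check, assuming the same device name:
stat -c '%n %t:%T' /dev/dri/renderD128
# /dev/dri/renderD128 e2:80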
Also, check that you can see the iGPU by running:
lspci -nnv | grep VGA
Your output should be something like:
00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller [8086:0412] (rev 06) (prog-if 00 [VGA controller])
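To tie that PCI address (00:02.0 above) to the DRI nodes, you can look at the by-path symlinks; the names below are just what I would expect for this setup:
ls -l /dev/dri/by-path
# pci-0000:00:02.0-card -> ../card0
# pci-0000:00:02.0-render -> ../renderD128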
Edit your /etc/pve/lxc/xxx.conf file and add:
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file,mode=666 0 0
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
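If your major/minor numbers differ from 226:0 and 226:128, here is a small sketch (run on the host) that prints matching allow lines for whatever DRI nodes you have; it assumes the usual card*/renderD* names and converts stat's hex output to decimal:
# Print one cgroup2 allow line per DRI node (stat reports major/minor in hex)
for dev in /dev/dri/card* /dev/dri/renderD*; do
  printf 'lxc.cgroup2.devices.allow: c %d:%d rwm\n' "0x$(stat -c %t "$dev")" "0x$(stat -c %T "$dev")"
done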
If you restart your LXC container now and run ls -l /dev/dri there, you should see output similar to:
crw-rw---- 1 nobody nobody 226, 0 Mar 21 07:09 card0
crw-rw---- 1 nobody nobody 226, 128 Mar 21 07:09 renderD128
Note that we do not have any group assigned there! We need to allow the host to map the video and render GIDs into our container and then map them.
First, get the GID of each group on the host:
getent group video | cut -d: -f3 # 44 in my Host machine
getent group render | cut -d: -f3 # 103 in my Host machine
Then allow the host to map those GIDs by adding the following to /etc/subgid:
root:44:1
root:103:1
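Assuming the stock Proxmox entry root:100000:65536 is already there (it is by default), /etc/subgid on the host ends up looking like this:
root:100000:65536
root:44:1
root:103:1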
Now we need to check the GIDs of video and render inside the LXC container:
getent group video | cut -d: -f3 # 44 in my LXC
getent group render | cut -d: -f3 # 108 in my LXC
To map those IDs from host to LXC, we have to **carefully** add these mappings to the container’s /etc/pve/lxc/xxx.conf:
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 63
lxc.idmap: g 108 103 1
lxc.idmap: g 109 100109 65427
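As a sanity check, the group counts should add up to 65536 (44 + 1 + 63 + 1 + 65427). After restarting the container you can also confirm the applied mapping from inside it; the columns are container GID, host GID, and count:
cat /proc/self/gid_map
# 0    100000  44
# 44   44      1
# 45   100045  63
# 108  103     1
# 109  100109  65427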
The output of ls -l /dev/dri inside the container should now be similar to:
$ ls -l /dev/dri
total 0
drwxr-xr-x 2 nobody nogroup 80 Mar 21 11:23 by-path
crw-rw---- 1 nobody video 226, 0 Mar 21 11:23 card0
crw-rw-rw- 1 nobody render 226, 128 Mar 21 11:23 renderD128
As a last step, we have to change the permissions of the device ON THE HOST so that it can be used by these groups (be aware: this will allow the device to be used by ANYONE):
chmod 0666 /dev/dri/renderD128
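A quick way to confirm this from inside the container, assuming a www-data user exists (it does on the Debian/Nextcloud container used here):
# Should print OK if an unprivileged user can read and write the render node
su -s /bin/sh -c 'test -r /dev/dri/renderD128 && test -w /dev/dri/renderD128 && echo OK' www-data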
One last thing: this will be reset by Proxmox after each reboot, so we need to add a udev rule that re-applies it:
nano /etc/udev/rules.d/59-igpu-chmod666.rules
and add the content:
KERNEL=="renderD128", MODE="0666"
Extra! Nextcloud Memories Hardware Acceleration
We can just follow the steps at https://memories.gallery/hw-transcoding/#internal-transcoder. You can skip setting the device permissions to 666; that is already handled by the mount config and the udev rule above.
Issues:
I ran into a couple of problems while following the transcoding guide.
- My Nextcloud instance is running on Debian, so the driver installation from the guide did not work. I had to run the following inside the LXC container (see the verification sketch after this list):
  - Install software-properties-common: apt install software-properties-common
  - Add the non-free repo: apt-add-repository -y non-free
  - Install the driver: apt install -y intel-media-va-driver-non-free
  - And run this on the host: chmod 666 /dev/dri/renderD128 (this gives RW permissions to ALL users!)
- I had a running instance of Nextcloud and, for the life of me, I could not map it correctly. To get this working I had to create a new LXC instance, make the mapping, and then install Nextcloud!
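To verify that the driver from the list above actually works inside the container, a quick VA-API check with vainfo (from the vainfo package) is a reasonable sketch; it should list VAProfile/VAEntrypoint pairs instead of erroring out:
apt install -y vainfo
vainfo --display drm --device /dev/dri/renderD128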