Recently, I acquired a second-hand Nvidia Tesla M40 to experiment with AI applications. With a second-hand X79 motherboard and CPU also on hand, I decided to assemble them into a dedicated AI server.
- A functioning computer with a PCIe x16 slot.
- An Nvidia GPU.
- Proxmox installation: Download and install Proxmox on your computer.
- Nvidia Linux driver: Download the Nvidia Linux driver from Nvidia's website. I’m using version 550 for this guide.
To run the Nvidia driver installer on both the Proxmox host and the LXC container, we need to transfer the downloaded file to both systems. On my Ubuntu PC, I navigate to the Downloads folder and run the command:
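Something like the following works; the driver file name and the host address are placeholders for my setup, so substitute your own:

```
cd ~/Downloads
# Copy the installer to the Proxmox host (placeholder hostname).
scp NVIDIA-Linux-x86_64-550.90.07.run root@proxmox.local:/root/
```

Once the LXC container exists, the same file can be pushed into it from the Proxmox host, e.g. with `pct push <CTID> NVIDIA-Linux-x86_64-550.90.07.run /root/NVIDIA-Linux-x86_64-550.90.07.run`.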
The `nouveau` driver is the open-source alternative to Nvidia's proprietary driver. To use the official driver, we need to disable it:
In the Proxmox interface, select the Proxmox node and click the Shell button.
Create the file `/etc/modprobe.d/nvidia-installer-disable-nouveau.conf` and add the following lines:
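These are the standard nouveau-blacklist lines (the same ones the Nvidia installer offers to write for you):

```
blacklist nouveau
options nouveau modeset=0
```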
Reboot the system with `reboot`.
Once the system is back up, install the essential packages for the Nvidia driver and run the installation script:
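On a Proxmox host that means build tools plus the PVE kernel headers. The package names below fit a typical Proxmox install, and the exact 550-series file name is a placeholder; adjust it to whatever you downloaded:

```
apt update
apt install -y build-essential pve-headers
chmod +x NVIDIA-Linux-x86_64-550.90.07.run
./NVIDIA-Linux-x86_64-550.90.07.run
```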
Next, add the following lines to `/etc/modules-load.d/modules.conf`:
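This tells systemd to load the Nvidia kernel modules at boot, so the /dev/nvidia* device nodes exist before any container starts. The two module names below are the common choice for this kind of passthrough; some setups also add nvidia_modeset and nvidia_drm:

```
nvidia
nvidia_uvm
```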
Update the initramfs with:
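This is the standard Debian command; it rebuilds the initramfs so the nouveau blacklist takes effect early in boot:

```
update-initramfs -u
```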
Create the file `/etc/udev/rules.d/70-nvidia.rules` and include:
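The rules below are the ones commonly used for LXC GPU passthrough: they (re)create the device nodes and relax their permissions whenever the Nvidia modules load. Treat them as a sketch and confirm the nvidia-smi and nvidia-modprobe paths on your host:

```
# Create /dev/nvidia* nodes and make them world-readable/writable.
KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L && /bin/chmod 666 /dev/nvidia*'"
# Same for the UVM devices that CUDA needs.
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u && /bin/chmod 0666 /dev/nvidia-uvm*'"
```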
Finally, reboot again with `reboot`.
Open a shell on your Proxmox host and run:
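A plain directory listing of the Nvidia device nodes shows their major numbers:

```
ls -al /dev/nvidia*
```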
You should see output similar to:
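(The listing below is illustrative; your minor numbers and timestamps will differ.)

```
crw-rw-rw- 1 root root 195,   0 Jun  1 10:00 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Jun  1 10:00 /dev/nvidiactl
crw-rw-rw- 1 root root 235,   0 Jun  1 10:00 /dev/nvidia-uvm
crw-rw-rw- 1 root root 235,   1 Jun  1 10:00 /dev/nvidia-uvm-tools

/dev/nvidia-caps:
cr-------- 1 root root 238, 1 Jun  1 10:00 nvidia-cap1
cr--r--r-- 1 root root 238, 2 Jun  1 10:00 nvidia-cap2
```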
Take note of the numbers after the owner group (the second root): these are the device major numbers, which we will allow in the LXC config. In my case, they are `195`, `235`, and `238`.
Add the following to the LXC config for the container that should get GPU access:
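A minimal sketch, assuming the major numbers above and a container ID of 101 (so the file is /etc/pve/lxc/101.conf on the host); swap in your own values:

```
# Allow character devices with major 195 (repeat for 235 and 238).
lxc.cgroup2.devices.allow: c 195:* rwm
# Bind the device nodes into the container (one entry per /dev/nvidia* node).
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
```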
Repeat for each device as necessary, then start the LXC container.
Open the console in your LXC container, log in, and run:
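Inside the container, the driver is installed in userspace-only mode, because the kernel module already runs on the host. The exact 550-series file name is again a placeholder:

```
chmod +x NVIDIA-Linux-x86_64-550.90.07.run
./NVIDIA-Linux-x86_64-550.90.07.run --no-kernel-module
```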
After rebooting the container, verify GPU access by running `nvidia-smi`.
Note: For InvokeAI, I start the application via the command line, so all dependencies must be installed manually. However, Ubuntu 24.04 ships Python 3.12, which InvokeAI was not yet compatible with, necessitating another container running Ubuntu 22.04 (Python 3.10).