Intel for Display, NVIDIA for ML

I’ve implemented a simplified version of the procedure described here (Intel for display, Nvidia for computing · GitHub) to ensure that the NVIDIA GPU of my Tensorbook is dedicated exclusively to tf/keras/python processes, while the Intel GPU drives the Ubuntu/GNOME GUI.

Prior to this, GNOME/Xorg were consuming around 300-600 MB of GPU RAM, a considerable share of the total GPU memory.
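To check what the display stack is holding, you can parse the per-process table that nvidia-smi emits. The sketch below assumes CSV-shaped output like that of `nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv`; the sample rows (PIDs, paths, memory figures) are made up for illustration, and note that graphics clients such as Xorg actually appear in the default `nvidia-smi` table rather than the compute-apps query.

```python
# Illustrative sample in the CSV shape nvidia-smi can emit; not real output.
sample = """pid, process_name, used_gpu_memory [MiB]
1234, /usr/lib/xorg/Xorg, 350 MiB
5678, /usr/bin/gnome-shell, 180 MiB
"""

def display_memory_mib(csv_text):
    """Sum the MiB reported for Xorg/gnome-shell rows of nvidia-smi CSV output."""
    total = 0
    for line in csv_text.strip().splitlines()[1:]:  # skip the header row
        pid, name, mem = (field.strip() for field in line.split(","))
        if "Xorg" in name or "gnome-shell" in name:
            total += int(mem.split()[0])  # "350 MiB" -> 350
    return total

print(display_memory_mib(sample))  # 530 with the sample above
```

After the switch, the same check should report zero display memory on the NVIDIA card.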

I personally found that it is not necessary to purge the NVIDIA software to achieve this result: it is enough to use the NVIDIA X Server Settings tool to switch the PRIME profile and to edit the xorg.conf file, adding a Device section for the NVIDIA card with no Screen associated with it.
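As a point of comparison (not the route I took), Ubuntu’s nvidia-prime package exposes the same profile switch on the command line via prime-select. Be aware that, as far as I know, the “intel” profile may unload the NVIDIA kernel modules entirely, while on recent drivers the “on-demand” profile keeps the driver loaded for CUDA with the display on the Intel GPU:

```shell
# Show the current PRIME profile (nvidia, intel, or on-demand)
prime-select query

# Switch profiles; a logout or reboot is needed for the change to take effect
sudo prime-select on-demand
```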

The only caveat is that opening LibreOffice Writer adds a process to the nvidia-smi process list. Any ideas on how to stop other applications from consuming NVIDIA GPU resources would be welcome.

Maybe this is the type of pre-configuration we should expect out of the box on a Lambdabook or a workstation, to maximize NVIDIA GPU utilization for ML?

Happy to share my thoughts on this…

For those interested in trying this out, this is the xorg.conf file I used (in /etc/X11/):
Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "BuiltIn"
EndSection

Section "Screen"
    Identifier "BuiltIn"
    Device "intel"
EndSection

Section "Device"
    Identifier "intel"
    Driver "intel"
    VendorName "Intel Corporation"
    BusID "PCI:0:2:0"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BusID "PCI:2:0:0"
EndSection