Multiple GPU Support

Eddy for Nuke can take advantage of multiple GPUs in a system, distributing both simulation and rendering tasks from a single Eddy process across the available GPU devices.

Requirements

The GPU devices to be used together must support peer-to-peer (P2P) communication, which only some GPUs provide:

  • P2P is only supported on cards in the Quadro, Tesla, and GeForce Titan families.

In particular, note that P2P is not supported on regular GeForce cards, non-Quadro RTX cards, or GeForce “Ti” cards such as the RTX 2080 Ti.

There are a few additional requirements for P2P to be available:

  • The GPUs must be from the same architectural generation. Using identical GPUs is preferred.

  • The GPUs must be connected to the same PCIe root complex. This is usually only an issue if using a dual CPU socket motherboard.

  • An NVLink bridge between the GPUs is not required, but will improve performance if present, particularly for simulation.
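Whether a pair of GPUs shares a PCIe root complex, and whether an NVLink bridge is present, can be checked from the command line with nvidia-smi's topology matrix, assuming the NVIDIA driver utilities are installed:

```shell
# Print the GPU interconnect topology matrix.
# In the output, "NV#" between two GPUs indicates an NVLink connection,
# "PIX"/"PXB"/"PHB" indicate a shared PCIe root complex, and "SYS" means
# the GPUs sit on different root complexes, where P2P is typically unavailable.
nvidia-smi topo -m
```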

For Windows there are additional requirements:

  • The GPU devices must be in TCC mode. Note that a device in TCC mode cannot be used for display output; it is entirely dedicated to compute. Not all cards support TCC mode; in general it is only available on Quadro, Tesla, and GeForce Titan GPUs.

Note

To change the driver to TCC mode for a device, use the command nvidia-smi -g 0 -dm 1, replacing 0 with the ID of the GPU.
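For example, on Windows the current driver model can be queried per device and then switched, assuming nvidia-smi is on the PATH and the shell is run as Administrator; a reboot is generally required for the change to take effect:

```shell
# Show each device's ID, name, and current driver model (WDDM or TCC).
# The driver_model.* query fields are only available on Windows.
nvidia-smi --query-gpu=index,name,driver_model.current --format=csv
# Switch device 0 to TCC mode (replace 0 with the desired device ID).
nvidia-smi -g 0 -dm 1
```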

Enabling multiple GPU support

Automatic selection

When Eddy for Nuke starts, it automatically selects the fastest available GPU in the system. If there are multiple GPUs, they are not enabled automatically; they must be selected manually as described below. The console window shows which GPU has been chosen and which GPUs are inactive.

Manual device selection

If the automatic selection does not choose the desired device, the GPUs used by Eddy can be chosen explicitly with the EDDY_DEVICE_LIST environment variable. Set it to a comma-separated list of device IDs; for example, EDDY_DEVICE_LIST=0,2 enables devices 0 and 2. As a convenience, Eddy displays the device IDs in the console window during startup, including the IDs of inactive devices.
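For example, to enable devices 0 and 2 from a Linux or macOS shell before launching Nuke (the echo simply confirms the setting; on Windows, use set EDDY_DEVICE_LIST=0,2 or the System environment variables dialog):

```shell
# Restrict Eddy to GPU devices 0 and 2 for any Nuke session
# launched from this shell.
export EDDY_DEVICE_LIST=0,2
echo "EDDY_DEVICE_LIST=$EDDY_DEVICE_LIST"
```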

Note

The environment variable CUDA_VISIBLE_DEVICES can also be used to specify which devices to use. Note, however, that it affects all CUDA-enabled applications, not just Eddy for Nuke. It also changes the device IDs as seen by Eddy, so we recommend using EDDY_DEVICE_LIST instead.
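To illustrate the renumbering, a sketch of what happens when only some devices are exposed via CUDA_VISIBLE_DEVICES:

```shell
# With CUDA_VISIBLE_DEVICES=1,3, a CUDA application sees only two devices:
# physical device 1 becomes ID 0 and physical device 3 becomes ID 1.
# An EDDY_DEVICE_LIST written against the physical IDs would then refer
# to the wrong (or nonexistent) devices.
export CUDA_VISIBLE_DEVICES=1,3
echo "$CUDA_VISIBLE_DEVICES"
```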

Performance

Rendering performance can be expected to scale almost linearly with the number of enabled GPUs.

Simulation performance generally only improves once the simulation becomes sufficiently large. As a rule of thumb, if a simulation takes several seconds per frame, multiple GPUs will increase performance; simulations faster than this may not benefit.