Torch not compiled with CUDA enabled in PyTorch

PyTorch is a popular open-source machine learning library for Python that is primarily used for developing and training deep learning models.

If you have an NVIDIA GPU and the necessary CUDA software installed on your machine, PyTorch can utilize the GPU to significantly speed up the training process.

Sometimes, you may encounter the AssertionError: Torch not compiled with CUDA enabled error, which means that the PyTorch build you’re using does not support CUDA (PyTorch ships separate builds with and without CUDA support).
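For context, on a CPU-only build even a one-line attempt to move a tensor to the GPU reproduces the error. This is a minimal sketch; the exact traceback varies with your PyTorch version:

import torch

# On a CPU-only build, the next line raises:
# AssertionError: Torch not compiled with CUDA enabled
x = torch.zeros(1).cuda()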

This article shows you a few possible ways to fix “Torch not compiled with CUDA enabled”. Because the underlying issue varies from setup to setup, you may have to try each of the solutions below until the error message goes away.

Completely reinstall PyTorch with CUDA

Uninstall current PyTorch

PyTorch can be installed in a few different ways. Therefore, there are various ways to uninstall it, depending on which package manager you’re using.

However, below is the combination of commands recommended by the PyTorch team to fully uninstall the package, no matter which package manager installed it in the past.

conda uninstall pytorch
pip uninstall torch
pip uninstall torch  # run this command twice

Notice that you would have to run pip uninstall torch multiple times (usually twice). You’ll know torch is fully uninstalled when you see WARNING: Skipping torch as it is not installed.
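If you want to double-check, an optional way to confirm the package is gone is pip show, which warns when the package is not installed:

pip show torch  # prints "WARNING: Package(s) not found: torch" once the uninstall is complete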

Install PyTorch with CUDA support

In order to install PyTorch with CUDA support, you need to have the following prerequisites:

  1. A CUDA-compatible NVIDIA GPU with drivers installed. You can check whether your GPU is compatible in NVIDIA’s CUDA-enabled GPUs list.
  2. The CUDA Toolkit, which includes the nvcc compiler and the CUDA runtime libraries. These are required if you build PyTorch from source. You can verify that both the driver and the toolkit are installed with the commands shown after this list.
  3. The cuDNN library, which provides optimized implementations of standard deep learning routines.
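As an optional quick check (assuming the NVIDIA tools are on your PATH), these commands report what is currently installed:

nvidia-smi      # shows the installed driver version and the highest CUDA version it supports
nvcc --version  # shows the version of the installed CUDA Toolkit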

Once you have these prerequisites installed, install PyTorch with CUDA support by following these steps:

  1. Launch a terminal window and run pip install --upgrade pip. This ensures that you have the latest version of pip.
  2. Go to https://pytorch.org/get-started/locally/ and choose the PyTorch Build, operating system, package manager and CUDA version suitable for your setup. Then copy the generated command to your clipboard (an example is shown after this list).
  3. Run the command you’ve just copied in a terminal/command prompt window. Answer yes to each prompt if needed.
  4. If you want to install a previous version of PyTorch, follow instructions at https://pytorch.org/get-started/previous-versions/.
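For reference only (the exact command generated in step 2 depends on the options you pick and changes between releases), a pip command for a CUDA 12.1 build looks like this:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121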

Remove “cpuonly” package

In a few specific setups, uninstalling PyTorch itself is not enough. You may have to remove the “cpuonly” package, too.

If the solution above doesn’t work, try uninstalling both PyTorch and the cpuonly package using these commands:

conda uninstall pytorch
conda uninstall cpuonly
pip uninstall torch --no-cache-dir
pip uninstall cpuonly --no-cache-dir

Notice the --no-cache-dir flag, which tells pip to bypass its cache. Pass it to pip install as well when you reinstall PyTorch; otherwise pip may reuse the CPU-only build it has already downloaded on your system.
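As a sketch (reusing the CUDA 12.1 index URL from the earlier example; adjust it to your setup), a reinstall that bypasses the cache would look like:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 --no-cache-dir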

Verify if CUDA is available

Once the new installation finishes, you may want to check whether the GPU is available to PyTorch. Below is the recommended way to do this using is_available() (code from the PyTorch | Get Started page):

import torch
torch.cuda.is_available()

If the command above returns False, one of the following is likely the cause:

  • You don’t have a GPU at all.
  • The NVIDIA drivers aren’t loaded, so the OS can’t recognize the GPU.
  • The GPU is hidden by the CUDA_VISIBLE_DEVICES environment variable. When CUDA_VISIBLE_DEVICES is set to -1, all of your devices are hidden. You can print its value from your code using os.environ['CUDA_VISIBLE_DEVICES'], as shown in the snippet below.
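Here is a minimal diagnostic sketch combining these checks; it only reads the environment variable and does not modify anything:

import os
import torch

print(torch.cuda.is_available())   # False if any of the causes above applies
print(torch.cuda.device_count())   # number of CUDA devices PyTorch can see
print(os.environ.get('CUDA_VISIBLE_DEVICES', 'not set'))  # '-1' hides every GPU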

Even if torch.cuda.is_available() returns True, that does not necessarily mean the GPU is being used. Tensors are created on the CPU by default; you have to move them (and your model) to a CUDA device explicitly.
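The usual pattern looks like this (the tensor here is just a placeholder example):

import torch

# Fall back to the CPU when no CUDA device is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.randn(3, 3)  # created on the CPU by default
x = x.to(device)       # moved to the GPU when one is available
print(x.device)        # e.g. cuda:0, or cpu on a CPU-only setup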

We hope that the information above was useful and helped you successfully fix the AssertionError: Torch not compiled with CUDA enabled error.

We’ve also written a few other guides related to PyTorch, such as torch.squeeze vs torch.unsqueeze, Save model with torch.save, and a few torch.squeeze code examples from open-source projects.

If you have any questions, then please feel free to ask in the comments below.
