
TensorFlow Crash with CUDNN_STATUS_ALLOC_FAILED

Been searching the web for hours with no results, so figured I'd ask here. I'm trying to make a self-driving car following Sentdex's tutorial, but when running the model, I get a CUDNN_STATUS_ALLOC_FAILED error.

Solution 1:

In my case, the issue happened because another Python console with TensorFlow imported was running. Closing it solved the problem.

I'm on Windows 10; the main errors were:

failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED

Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
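
If you're not sure what else is holding GPU memory, you can list the current compute processes with nvidia-smi. A minimal sketch, assuming the nvidia-smi tool that ships with the NVIDIA driver is on your PATH (this check is not part of the original answer):

import subprocess

# Print every process currently holding GPU memory, with PID and usage.
result = subprocess.run(
    ["nvidia-smi", "--query-compute-apps=pid,process_name,used_memory",
     "--format=csv"],
    capture_output=True, text=True)
print(result.stdout)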

Solution 2:

You're probably running out of GPU memory.


If you're using TensorFlow 1.x:

1st option) set allow_growth to True.

import tensorflow as tf

# Grow GPU memory allocation on demand instead of grabbing it all up front.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

2nd option) set a memory fraction.

import tensorflow as tf

# Change the memory fraction as you want; 0.3 caps this process at ~30% of GPU memory.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
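
If you're also using Keras on top of TensorFlow 1.x, Keras may create its own session and ignore the one above. A minimal sketch registering the configured session via tf.keras.backend.set_session (an extra step, not in the original answer):

import tensorflow as tf

# Build a session that grows GPU memory on demand (option 1 above).
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

# Tell Keras to reuse this session instead of creating its own.
tf.keras.backend.set_session(sess)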

If you're using TensorFlow 2.x:

1st option) enable memory growth with set_memory_growth.

# Currently the 'memory growth' option should be the same for all GPUs.
# You should set the 'memory growth' option before initializing GPUs.
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    for gpu in gpus:
      tf.config.experimental.set_memory_growth(gpu, True)
  except RuntimeError as e:
    print(e)

2nd option) set memory_limit as you want. Just change the GPU index and the memory_limit value (in MB) in the code below.

import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],  # which physical GPU to configure
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])  # MB
  except RuntimeError as e:
    print(e)
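
Either way, you can sanity-check that the setting took effect by comparing physical and logical devices. A quick check, not part of the original answer:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
logical_gpus = tf.config.experimental.list_logical_devices('GPU')

# After set_virtual_device_configuration, the logical GPU is the memory-limited one.
print(len(gpus), "physical GPU(s),", len(logical_gpus), "logical GPU(s)")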

Solution 3:

Setting this environment variable (before TensorFlow initializes the GPU) solved my problem:

import os
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'

My environment:

cuDNN 7.6.5

TensorFlow 2.4

CUDA Toolkit 10.1

RTX 2060

Solution 4:

Try adding the CUDA path to your environment variables. It seems the problem is with CUDA.

Set the CUDA path in ~/.bashrc (edit with nano):

# CUDA NVIDIA path
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
export CUDA_HOME=/usr/local/cuda
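
After reloading the shell (source ~/.bashrc), you can confirm that TensorFlow finds CUDA again. A quick sanity check, not part of the original answer:

import tensorflow as tf

# True if this TensorFlow build was compiled with CUDA support.
print(tf.test.is_built_with_cuda())

# A non-empty list means the GPU and its CUDA libraries were found.
print(tf.config.experimental.list_physical_devices('GPU'))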

Solution 5:

I encountered the same problem, then found out it was because I was also using the GPU for other things, even though Task Manager (Windows) didn't show them as using the GPU. This can include rendering videos, video encoding, playing a demanding game, coin mining, and so on. If you think something else is still putting a heavy load on the GPU, close it and the problem is solved.
