Using TensorFlow with GPU within RMarkdown

TensorFlow settings for using a local GPU

Image credit: **Photo by Peter Steiner 🇨🇭 on Unsplash**

The Problem

I have set up the GPU on my workstation, and TensorFlow is able to access the GPU. You may refer to my GPU installation guide.

However, when I tried to use TensorFlow within RMarkdown, it reported the following errors:

2022-02-21 08:44:09.486498: E tensorflow/stream_executor/cuda/] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED

2022-02-21 08:44:09.486805: W tensorflow/core/framework/] OP_REQUIRES failed at matmul_op_impl.h:442 : INTERNAL: Attempting to perform BLAS operation using StreamExecutor without BLAS support

The main error appears to be “failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED”. The second error, “Attempting to perform BLAS operation using StreamExecutor without BLAS support”, is likely just a consequence of the first.

The Solution

I googled but couldn’t find any working solution except for this link. The problem seems to be caused by how GPU memory is allocated: by default, TensorFlow pre-allocates nearly all of the GPU memory, which can cause the cuBLAS handle creation to fail. You can instead configure TensorFlow to allocate GPU memory dynamically (growing as needed) with the following Python code.

import tensorflow as tf

# List the physical GPUs visible to TensorFlow
gpus = tf.config.list_physical_devices(device_type = 'GPU')

# Enable memory growth on the first GPU so TensorFlow allocates
# memory incrementally instead of pre-allocating all of it.
# Must be called before the GPU is initialized.
tf.config.experimental.set_memory_growth(gpus[0], True)

Now the question is how to incorporate the above Python code into the RMarkdown file. Thanks to the reticulate package and its source_python() function, I can run it from an R code chunk within the RMarkdown document.
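A minimal sketch of such an R code chunk is shown below; it assumes the Python snippet above has been saved to a file, here named gpu_setup.py (the file name is illustrative, not from the original post).

```r
# Load reticulate, the R-to-Python bridge used by TensorFlow for R
library(reticulate)

# Run the GPU memory-growth setup before any TensorFlow computation.
# "gpu_setup.py" is an assumed file name containing the Python code above.
source_python("gpu_setup.py")
```

Because source_python() executes the script in the embedded Python session, the memory-growth setting applies to all subsequent TensorFlow calls in the document.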


There are two benefits of doing so:

  • it keeps the setup in a separate file that can be reused elsewhere
  • everything stays within one R code chunk, i.e., there is no need for a separate Python code chunk

I hope you will find this useful.

Wang Jiwei
Associate Professor

My current research/teaching interests include digital transformation and data analytics in accounting.