
Memory leak in cutout function #2814

Closed

drauh opened this issue Feb 25, 2023 · 1 comment · Fixed by #2815

Comments

@drauh
Contributor

drauh commented Feb 25, 2023

Describe the bug

There is a memory leak in the cutout function when using random sizes and offsets together with XLA.

Code to reproduce the issue

Run this on Colab with a GPU.

import tensorflow as tf
import tensorflow_addons as tfa
import psutil

# XLA JIT compilation is part of the reproduction.
tf.config.optimizer.set_jit(True)
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
process = psutil.Process()

image_size = 3000
@tf.function()
def random_cutout(img):
    img = tf.tile(img[tf.newaxis, ...], [16, 1, 1, 1])
    # Random mask size and offsets on each call; combined with XLA this leaks host memory.
    mask_size = tf.random.uniform(
        shape=[], minval=image_size // 200, maxval=image_size // 4, dtype=tf.int32
    )
    mask_size *= 2
    offset = tf.random.uniform(
        shape=[tf.shape(img)[0], 2],
        minval=0,
        maxval=image_size - mask_size,
        dtype=tf.int32,
    )
    offset += tf.expand_dims(mask_size, axis=0) // 2
    return tfa.image.cutout(img, tf.stack([mask_size, mask_size]), offset=offset)

img = tf.ones([image_size, image_size, 3])
for _ in range(100):
    for _ in range(1000):
        random_cutout(img)
    print(f"{process.memory_info().rss / 1024 ** 3:.2f} GB")

Output:

WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23.
Instructions for updating:
Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089
1.48 GB
1.52 GB
1.54 GB
1.62 GB
1.64 GB
1.67 GB
1.69 GB
1.82 GB
1.84 GB
1.86 GB
1.89 GB
1.91 GB
1.94 GB
1.97 GB
2.18 GB
2.21 GB

Cause

This inner function definition, which captures tensors from the enclosing scope (lower_pads, ...), is causing the issue. Moving this function to module level in cutout_ops.py, with cutout_shape and padding_dims passed in as arguments, fixes the issue (a sketch of this change follows the snippet below).

The relevant code is at line 154 of cutout_ops.py:

        def fn(i):
            padding_dims = [
                [lower_pads[i], upper_pads[i]],
                [left_pads[i], right_pads[i]],
            ]
            mask = tf.pad(
                tf.zeros(cutout_shape[i], dtype=tf.bool),
                padding_dims,
                constant_values=True,
            )
            return mask

        mask = tf.map_fn(
            fn,
            tf.range(tf.shape(cutout_shape)[0]),
            fn_output_signature=tf.TensorSpec(
                shape=image_static_shape[1:-1], dtype=tf.bool
            ),
        )     
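For illustration, here is a rough sketch of that change. It reuses the surrounding variables from cutout_ops.py (lower_pads, upper_pads, left_pads, right_pads, cutout_shape, image_static_shape); the helper name _generate_mask and the exact stacking are illustrative, not necessarily what the linked PR does:

# Module level in cutout_ops.py -- illustrative sketch, not necessarily the merged fix.
def _generate_mask(args):
    # Each element is a (cutout_shape_i, padding_dims_i) pair for one image,
    # so nothing is captured from the enclosing tf.function trace.
    cutout_shape_i, padding_dims_i = args
    return tf.pad(
        tf.zeros(cutout_shape_i, dtype=tf.bool),
        padding_dims_i,
        constant_values=True,
    )

# Inside cutout(): stack the per-image paddings and pass them to tf.map_fn
# explicitly instead of closing over them.
padding_dims = tf.stack(
    [
        tf.stack([lower_pads, upper_pads], axis=-1),
        tf.stack([left_pads, right_pads], axis=-1),
    ],
    axis=1,
)  # shape [batch, 2, 2]
mask = tf.map_fn(
    _generate_mask,
    (cutout_shape, padding_dims),
    fn_output_signature=tf.TensorSpec(
        shape=image_static_shape[1:-1], dtype=tf.bool
    ),
)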

If needed (I don't know, since TensorFlow Addons is winding down), I can open a small pull request to fix this bug.

@bhack
Contributor

bhack commented Feb 25, 2023

Since cutout is already available in keras_cv, I suggest migrating. But if it is a small PR, please submit it. https://keras.io/api/keras_cv/layers/preprocessing/random_cutout/
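For anyone migrating, a minimal usage sketch of the keras_cv layer linked above (the factor values here are just illustrative, and keras_cv must be installed separately):

import tensorflow as tf
import keras_cv

# Cutout size is given as a fraction of the image height/width.
random_cutout = keras_cv.layers.RandomCutout(height_factor=0.5, width_factor=0.5)

images = tf.ones([16, 224, 224, 3])
augmented = random_cutout(images)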
