ioredis left open handles in Jest #1088

Closed
abriginets opened this issue Mar 31, 2020 · 33 comments

Comments

@abriginets

Hello. I can't resolve an issue with my Jest tests caused by ioredis. Recently I started receiving this message after Jest finishes all tests:

A worker process has failed to exit gracefully and has been force exited. This is likely caused by tests leaking due to improper teardown. Try running with --runInBand --detectOpenHandles to find leaks.
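
For reference, those flags can be passed to Jest directly, e.g.:

npx jest --runInBand --detectOpenHandles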

After a few days of struggling I found a module called leaked-handles, installed it and started debugging. It seems like neither redis.quit() nor redis.disconnect() is working, and the connection remains open after those calls. The reported leak is:

at D:\github\umbress\node_modules\ioredis\built\connectors\StandaloneConnector.js:58:45
tcp stream {
  fd: -1,
  readable: true,
  writable: true,
  address: { address: '127.0.0.1', family: 'IPv4', port: 59823 },
  serverAddr: null
}

I couldn't find any mention of ioredis in the Jest repo and vice versa. Is there anything that can be done to close the connection properly?

@mojjy

mojjy commented Apr 2, 2020

Hi, I am experiencing the same behaviour with ioredis and jest

@alavers
Contributor

alavers commented Apr 15, 2020

How about performing a redis.quit() or redis.disconnect() inside of your after or afterEach teardown methods?
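
For example, a minimal sketch (assuming a single shared client named redis created in your test setup):

afterAll(async () => {
  await redis.quit(); // quit() returns a promise, so the hook waits for the server's reply
});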

@victorhtc

I'm facing the same issue and had to use Jest's --forceExit option, which is clearly not the best solution. It seems that redis.disconnect() should be an async operation so that we could add a callback to be executed after the connection is properly terminated. Currently it just triggers the disconnection but does not wait for it to complete. When Jest finishes executing the tests, the connection is still open.
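
One way to approximate that today is to wait for the client's 'end' event yourself; a rough sketch, assuming a shared redis client:

afterAll(async () => {
  await new Promise((resolve) => {
    redis.once('end', resolve); // 'end' fires once the connection has fully closed
    redis.quit();
  });
});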

@ingoziemann

I think something like this in after should do the trick:

await new Promise((resolve) => {
  client.once("end", resolve);
  client.quit();
});

@jsking216

Having a similar issue where I am calling quit on all ioredis clients in my afterAll hook and it is still leaking handles after the tests.

@beneater

I'm having a similar issue. I was able to work around it by continually checking the connection status. If it's not disconnected, just wait 200ms and check again, etc:

redis.disconnect();
// ioredis statuses go ready -> close -> end after a manual disconnect; poll until "end"
while (redis.status !== "end") {
  await new Promise(r => setTimeout(r, 200));
}

@alanbueno

+1 with the issue here, had both redis.quit() awaited and redis.disconnect() called but nothing :/

@Roms1383

How about performing a redis.quit() or redis.disconnect() inside of your after or afterEach teardown methods?

I've tried both without any success too🤷‍♂️

@Roms1383

I finally got it working, is anybody interested in a sample repo?

@jsking216

I finally got it working, is anybody interested in a sample repo?

I'd love to see a working solution.

@jaredjj3

I finally got it working, is anybody interested in a sample repo?

Let's compare.

const teardown = async (redis: RedisClient) => {
  await new Promise((resolve) => {
    redis.quit();
    redis.on('end', resolve);
  });
};

afterAll(async () => {
  await teardown(redis);
});

@Roms1383

Roms1383 commented Oct 15, 2020

No need to compare @jaredjj3, you got it right :)
It also works with simply:

afterAll(async () => {
  await redis.quit()
})

I originally got the issue because I was wrapping it in a Nest.js service, but I realized afterwards that I wasn't correctly handling the lifecycle, like:

import IORedis from 'ioredis'
import { Injectable, OnApplicationBootstrap, OnApplicationShutdown } from '@nestjs/common'

@Injectable()
export class RedisService implements OnApplicationBootstrap, OnApplicationShutdown {
  private redis: IORedis.Redis
  // ...
  async onApplicationBootstrap() {
    this.redis = new IORedis()
  }
  async onApplicationShutdown() {
    await this.redis.quit()
  }
}

In the meantime I switched to the basic Nest.js implementation, which uses redis under the hood and for which using promisify also made my life easier.
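
For anyone curious about the promisify part, a rough sketch assuming a callback-based node_redis client named client:

import { promisify } from 'util'

afterAll(async () => {
  const quitAsync = promisify(client.quit).bind(client)
  await quitAsync() // resolves once the server acknowledges QUIT
})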

@Roms1383

@jsking216 I made a sample repo with tests for ioredis / redis / Nest.js 👌

@CaptainJojo

Same with the new version 4.19, but with 4.17 everything is OK.

@AVVS
Contributor

AVVS commented Oct 28, 2020

Supposed to be fixed in the latest release

@barclayd

Found the same issue in 4.19.2.

Able to remove the error by using:

afterAll(async () => {
  await this.redis.quit();
});

@tot-ra

tot-ra commented Aug 10, 2021

For me, closing the publisher helped (redis quit and disconnect were not enough):

const pub = new RedisPubSub({
    publisher: redis
})

pub.close(); // <----

@swkim0572

@tot-ra how do you get RedisPubSub?

@shahinghasemi

Still having the same problem in our Test suites, any workable solution?

@tot-ra

tot-ra commented Sep 7, 2021

@swkim0572 oops, sorry, it was imported with import { RedisPubSub } from 'graphql-redis-subscriptions'; so it implements PubSubEngine and is not really ioredis-related.

@MehdiSaffar

I am having the same problem and it has been giving me a headache. I don't understand why I still see "ioredis:redis write command[127.0.0.1:6379]: 0 -> brpoplpush([ 'bull:worker:wait', 'bull:worker:active', '5' ]) +5s" getting printed even when client.status is end.

@georgios-tsoumas

I am having the same problem and it has been giving me a headache. I don't understand why I still see "ioredis:redis write command[127.0.0.1:6379]: 0 -> brpoplpush([ 'bull:worker:wait', 'bull:worker:active', '5' ]) +5s" getting printed even when client.status is end.

I have the same issue. My implementation also involves Bull, and more specifically a blocking request like yours. It seems I can't quit a blocking client; both the client and the subscriber, though, terminate just fine. I'm studying the way Bull itself closes its connections, because I guess they faced the same "problem":

const pTimeout = require('p-timeout'); // pTimeout comes from the p-timeout package

async function redisClientDisconnect(client) {
  if (client.status !== 'end') {
    let _resolve, _reject;
    return new Promise((resolve, reject) => {
      _resolve = resolve;
      _reject = reject;
      client.once('end', _resolve);

      pTimeout(
        client.quit().catch(err => {
          if (err.message !== 'Connection is closed.') {
            throw err;
          }
        }),
        500
      ).catch(() => {
        client.once('error', _reject);
        client.disconnect();
        if (['connecting', 'reconnecting'].includes(client.status)) {
          resolve();
        }
      });
    }).finally(() => {
      client.removeListener('end', _resolve);
      client.removeListener('error', _reject);
    });
  }
}

@dobesv

dobesv commented May 20, 2022

I dug into this a bit, and setting DEBUG='ioredis:*' shows some interesting information. What I see is that the redis client transitions to end, but only a bit later does it print that it is destroying the TCP connection:

  ioredis:redis write command[127.0.0.1:40029]: 0 -> quit([]) +8ms
  ioredis:redis write command[127.0.0.1:40029]: 0 -> quit([]) +2ms
  ioredis:redis status[127.0.0.1:40029]: ready -> close +6ms
  ioredis:connection skip reconnecting since the connection is manually closed. +20ms
  ioredis:redis status[127.0.0.1:40029]: close -> end +1ms
  ioredis:redis status[127.0.0.1:40029]: ready -> close +2ms
  ioredis:connection skip reconnecting since the connection is manually closed. +4ms
  ioredis:redis status[127.0.0.1:40029]: close -> end +2ms
... some time passes ...
  ioredis:AbstractConnector stream 127.0.0.1:40029 still open, destroying it +0ms
  ioredis:AbstractConnector stream 127.0.0.1:40029 still open, destroying it +4ms

So the redis instance has transitioned to the end state without waiting for the socket to actually end. You can see in StandaloneConnector that it sets a timer to destroy the socket if it doesn't end within a certain timeout. However, the redis instance itself has already emitted the end event and set the end state before that timer fires.
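
A possible workaround based on that observation; note that redis.stream is an internal, undocumented property, so this is only a sketch and may break between versions:

const quitAndWaitForSocket = async (redis) => {
  const stream = redis.stream; // internal net.Socket, not part of the public API
  await redis.quit();
  if (stream && !stream.destroyed) {
    // wait for the TCP socket itself to close, not just the client's 'end' event
    await new Promise((resolve) => stream.once('close', resolve));
  }
};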

@dobesv

dobesv commented May 20, 2022

Looking around a bit, it seems the redis client called end on the stream to disconnect, which sends a TCP FIN to the redis server. It's strange that in this case the redis server is not responding with a FIN within the given timeout; normally redis servers are super fast at everything.

@dobesv

dobesv commented May 20, 2022

W.r.t. issues with Bull, keep in mind that it manages its own redis connections to some degree, so it's possible that even though you think you've closed the connection, there may be other redis client instances hanging around that you haven't closed.
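
If that's the case, closing the queue itself may be what's missing, since that asks Bull to shut down the clients it created; a sketch, assuming a Bull queue instance named queue:

afterAll(async () => {
  await queue.close(); // Bull closes the client/subscriber/blocking connections it owns
  await redis.quit();  // then close any connection you created yourself
});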

@dobesv

dobesv commented May 20, 2022

I think there might be a bug in here:

class AbstractConnector {
    constructor(disconnectTimeout) {
        this.connecting = false;
        this.disconnectTimeout = disconnectTimeout;
    }
    check(info) {
        return true;
    }
    disconnect() {
        this.connecting = false;
        if (this.stream) {
            const stream = this.stream; // Make sure callbacks refer to the same instance
            const timeout = setTimeout(() => {
                debug("stream %s:%s still open, destroying it", stream.remoteAddress, stream.remotePort);
                stream.destroy();
            }, this.disconnectTimeout);
            stream.on("close", () => clearTimeout(timeout));
            stream.end();
        }
    }
}

If the stream was closed already (e.g. using the quit command), the close event was already delivered and the timeout is left running.
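
For illustration only, a sketch of the kind of guard that would avoid leaving the timer running (my reading of the code above, not an actual ioredis patch):

disconnect() {
  this.connecting = false;
  if (this.stream) {
    const stream = this.stream;
    if (stream.destroyed) {
      // the socket is already closed, so there is nothing to wait for and no timer to leak
      return;
    }
    const timeout = setTimeout(() => stream.destroy(), this.disconnectTimeout);
    stream.once("close", () => clearTimeout(timeout));
    stream.end();
  }
}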

@dobesv

dobesv commented May 20, 2022

Hmm, no, the bug was in my own code, arguably:

async function main() {
  const Redis = require('ioredis');
  const client = new Redis({
    port: 6379,
    host: 'localhost',
    maxRetriesPerRequest: 1,
  });
  client.on('end', () => {
    client.disconnect();
  })
  await client.set('hello', 'world')
  await client.quit();
  client.disconnect();
}
main();

I was calling client.disconnect() in my end handler, which does lead to this bug, but I don't think this is something ioredis really needs to support, as it's my own dumb fault.

Sorry for the noise!

@ninthsun91

I'm having a similar issue. I was able to work around it by continually checking the connection status. If it's not disconnected, just wait 200ms and check again, etc:

redis.disconnect();
// ioredis statuses go ready -> close -> end after a manual disconnect; poll until "end"
while (redis.status !== "end") {
  await new Promise(r => setTimeout(r, 200));
}

This worked for me. bb

@Yuri-Lima

Hey guys, I had the same issue and worked around it. Not sure why, but what I basically did was add a timeout.
If you still have the issue, try to just increase the timeout until you get the result.

afterAll(async () => {
  const ok = await redis.quit();
  if (ok === 'OK') {
    console.log('Redis connection closed');
  }
}, 180000); // 3 minutes

@luin
Collaborator

luin commented Jan 25, 2023

Looks like we already have solutions for the issue, so closing.

@superern

I finally got it working, is anybody interested in a sample repo?

Let's compare.

const teardown = async (redis: RedisClient) => {
  await new Promise((resolve) => {
    redis.quit();
    redis.on('end', resolve);
  });
};

afterAll(async () => {
  await teardown(redis);
});
await new Promise<void>((resolve) => {
  redis.disconnect();
  redis.on('end', resolve);
});

console.log(redis.status === 'end')

This works

@fbartkow-from-f

Not sure if this will help some people, but something we found while working on this was that the act of loading up the config/redisStore was actually the issue.

Essentially, we had this cache config file set up like this:

import { redisStore, RedisStore } from 'cache-manager-ioredis-yet';
import { registerAs } from '@nestjs/config';

export const CACHE_CONFIG = 'cache';

export interface CacheConfig {
  store: RedisStore;
  host: string;
  port: number;
  ttl: number;
}

export default registerAs(CACHE_CONFIG, async () => {
  const host = // Host stuff
  const port = // Port stuff
  const ttl = // TTL stuff
  return {
    store: await redisStore({
      host,
      port,
      ttl,
      tls: // TLS stuff,
      maxRetriesPerRequest: //Retries stuff,
      retryStrategy(times) {
        // retry strategy
      },
    }),
    host,
    port,
    ttl,
  } as CacheConfig;
});

Then we would load this config up into our application in a Common Module like this:

import { ConfigModule, ConfigService } from '@nestjs/config';
import { Module } from '@nestjs/common';
import { CacheModule } from '@nestjs/cache-manager';
import cacheConfig, { CACHE_CONFIG, CacheConfig } from '@config/cache.config';

@Module({
  imports: [
    ConfigModule.forRoot({
      isGlobal: true,
      load: [
        cacheConfig,
        // Other configs
      ],
    }),
    CacheModule.registerAsync({
      imports: [ConfigModule],
      useFactory: async (configService: ConfigService) =>
        configService.get<CacheConfig>(CACHE_CONFIG),
      inject: [ConfigService],
      isGlobal: true,
    }),
  ],
  exports: [CacheModule, ConfigModule],
})
export class CommonModule {}

What we ended up doing was utilizing Conditional Modules, and only loading up the Redis connection when the system was not being tested. Here's what I mean:

import { ConditionalModule, ConfigModule, ConfigService } from '@nestjs/config';
import { Module } from '@nestjs/common';
import { CacheModule } from '@nestjs/cache-manager';
import cacheConfig, { CACHE_CONFIG, CacheConfig } from '@config/cache.config';

@Module({
  imports: [
    ConfigModule.forRoot({
      isGlobal: true,
      load: [
        // Other configs - Note that the cache config isn't loaded here at all anymore
      ],
    }),
    // Note - Using the in-memory cache for tests
    ConditionalModule.registerWhen(
      CacheModule.register({ isGlobal: true }),
      (env: NodeJS.ProcessEnv) => Boolean(env['TEST_ENVIRONMENT']),
    ),
    ConditionalModule.registerWhen(
      CacheModule.registerAsync({
        // Note - Cache config is loaded up here but only for this particular feature
        imports: [ConfigModule.forFeature(cacheConfig)],
        inject: [ConfigService],
        useFactory: async (configService: ConfigService) =>
          configService.get<CacheConfig>(CACHE_CONFIG),
        isGlobal: true,
      }),
      (env: NodeJS.ProcessEnv) => !Boolean(env['TEST_ENVIRONMENT']),
    ),
  ],
})
export class CommonModule {}

@shaynlink

Using the .duplicate() method for pub/sub also can't exit properly and leaves an open handle with Jest.

close method

  /**
   * @returns {Promise<[undefined, undefined, undefined]>}
   */
  close() {
    return Promise.all([
      new Promise((resolve) => {
        this.redis.removeAllListeners();
        this.redis.once('end', resolve);
        this.redis.quit();
        this.redis.disconnect(false);
      }),
      new Promise((resolve) => {
        this.subscriber.subRedis.removeAllListeners();
        this.subscriber.subRedis.once('end', resolve);
        this.subscriber.subRedis.quit();
        this.subscriber.subRedis.disconnect(false);
      }),
      new Promise((resolve) => {
        this.publisher.subRedis.removeAllListeners();
        this.publisher.subRedis.once('end', resolve);
        this.publisher.subRedis.quit();
        this.publisher.subRedis.disconnect(false);
      })
    ])
  }

Even using both methods, quit and disconnect (or just quit, or just disconnect), doesn't seem to close the handles.

close method in Jest:

afterAll(async () => {
  return await instance.close()
});

error

Jest has detected the following 1 open handle potentially keeping Jest from exiting:

  ●  Timeout

      15 |      * @type {import('ioredis').Redis}
      16 |      */
    > 17 |     this.subRedis = this.instance.redis.duplicate();
         |                                         ^
      18 |
      19 |     /**
      20 |      * @type {Set<CommandBuilder>}

      at emitConnectEvent (node_modules/ioredis-mock/src/commands-utils/emitConnectEvent.js:2:11)
      at new _RedisMock (node_modules/ioredis-mock/src/index.js:115:7)
      at _RedisMock.duplicate (node_modules/ioredis-mock/src/index.js:178:18)
      at new duplicate (src/events/Subscriber.js:17:41)
      at new Instance (src/Instance.js:46:23)
      at Object.<anonymous> (tests/filter.test.mjs:10:14)
