Spanner - noStatus - TypeError: requestStream.destroy is not a function #2356
As I understand from "Handle deleted sessions":
In my opinion, after the exception the sessions stayed open. Is there any way to close them? (Why were they not closed when I stopped the program?) When I had only 1 node, the session overflow occurred after I exported around 4,000 users. With 3 nodes it happened after 7,000 users. I think the client library does not close and manage sessions well, because the quantity of data exported is not that large... or maybe I need to set up poolOptions correctly. Please explain. Thank you
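For readers with the same question: the pool is configured per Database handle. A minimal sketch, assuming the 0.x Node.js client; the option names (`max`, `min`) are assumptions and may differ between releases:

```js
// Hedged sketch: passing session pool options when creating the
// Database handle. Option names are assumptions; check the
// SessionPoolOptions supported by the client version you run.
var spanner = require('@google-cloud/spanner')({
  projectId: 'my-project' // placeholder
});

var instance = spanner.instance('my-instance');

// The second argument configures this handle's session pool.
var database = instance.database('my-database', {
  max: 100, // upper bound on concurrent sessions
  min: 5    // warm sessions kept ready
});
```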
Thank you for the detailed report! @callmehiphop -- regarding this error:
It's likely that the request stream returned from GAX doesn't have a `destroy` method.
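For anyone hitting the TypeError in the meantime, a hedged workaround sketch: guard the call rather than assuming the stream exposes `destroy`. The fallback method names here are assumptions about what a given stream implementation provides:

```js
// Sketch of defensive teardown for a stream whose exact interface
// we can't rely on. Only methods that actually exist are called.
function safelyEndStream(requestStream) {
  if (!requestStream) {
    return;
  }
  if (typeof requestStream.destroy === 'function') {
    requestStream.destroy();
  } else if (typeof requestStream.cancel === 'function') {
    // gRPC/GAX streams commonly expose cancel() instead of destroy()
    requestStream.cancel();
  } else if (typeof requestStream.end === 'function') {
    requestStream.end();
  }
}
```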
I do not know are they related or not, maybe they are. The old mistake is not coming out, this means that now many retiring happens with new requests and sessions remains open. And they remain open when after i restart app. And in order not to wait for 1 hour, I have to delete the database Maybe sessions remains open anytime and it is revealed during many calls. I'm doing my project and I hope that we will close together these important issues thanks |
@stephenplusplus If I understand correctly, the client crashes, which leads to a bunch of sessions not getting deleted. There is no RPC (I don't think, at least) to retrieve a list of pre-existing sessions, so if a user's app crashes multiple times in a short period, it's possible that they could hit the backend cap. @Chipintoza does that sound right? We have a method that will delete all the sessions the client is aware of (database#close), but I'm not sure if that will help you in this scenario.
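To make `database#close` concrete, a minimal sketch of calling it during a clean shutdown, assuming the callback-style API of the 0.x client and reusing the `database` handle from the snippet above:

```js
// Sketch: delete the sessions this client created before exiting.
// This only covers sessions the pool knows about; sessions leaked by
// an earlier crash are invisible to the client and must expire on the
// backend on their own.
process.on('SIGINT', function () {
  database.close(function (err) {
    if (err) {
      console.error('Failed to clean up sessions:', err);
    }
    process.exit(err ? 1 : 0);
  });
});
```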
One important question is how the client library ended up creating 30,000 sessions in the first place. Is it that when the request fails, the session is not being released back to the pool? Also, do we have a limit on the size of the pool?
@callmehiphop my app does not crash multiple times; I just ran it once. During the process there were first a few errors.
The cap for sessions should be 100, so I'm a little confused about why that number gets so high as well. @Chipintoza do you have some sample code we could use to recreate this?
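In case it helps with a repro, a sketch of the load pattern described in this thread: thousands of uncapped parallel writes against one handle. The table and columns are made up, and `database` is assumed to be an open handle:

```js
// Hypothetical repro: issue every insert at once, so nothing bounds
// how many sessions are checked out at the same time.
var table = database.table('users'); // made-up table
var pending = 10000;

for (var i = 0; i < 10000; i++) {
  table.insert({ id: 'user-' + i, name: 'User ' + i }, function (err) {
    if (err) {
      console.error('insert failed:', err.message);
    }
    if (--pending === 0) {
      console.log('import done');
    }
  });
}
```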
@callmehiphop I added you to my project.
@callmehiphop by
@stephenplusplus I am also seeing this error in our production environment. Let me know if there's any more info you need that would be helpful, or if there's a bad practice on the application side that leads to this error; that would be helpful for me to know.
@stephenplusplus @vkedia I deleted all @google-cloud/ packages and updated spanner to v0.5.0. After a certain quantity of data was imported, the following error kept occurring constantly:
and later:
It looks like the same problem :(
@danoscarmike @lukesneeringer @callmehiphop can we please get this resolved ASAP? This is causing customers to get locked out of Spanner for up to an hour.
@callmehiphop @lukesneeringer What's the ETA for fixing this issue? |
@bjwatson working on this right now. For anyone experiencing a similar issue, you'll want to check that you're not creating multiple Database instances for the same database. Every time a Database instance is created, it also creates a new session pool (which is why we see the number of sessions get really high). The PR I'm working on is going to cache the pools; that way users don't have to be bothered with caching database instances and we won't spin up sessions unnecessarily.
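To illustrate the advice above, a sketch of the anti-pattern next to a cached handle; until the pool-caching PR lands, keeping one handle per database in your own module avoids the duplicate pools. Names are illustrative:

```js
// BAD: a new Database instance (and a new session pool) per request.
function handleRequestBad(instance, req, res) {
  var database = instance.database('my-database'); // new pool every call
  database.run('SELECT 1', function () {
    res.end();
  });
}

// BETTER: create the handle once and reuse it everywhere.
var cachedDatabase = null;

function getDatabase(instance) {
  if (!cachedDatabase) {
    cachedDatabase = instance.database('my-database');
  }
  return cachedDatabase;
}
```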
@stephenplusplus With one database instance there are also many problems; I was waiting for you to finish the caching process.
Because of this, the retry process did not happen and a lot of data was lost when I updated packages/spanner/src/transaction.js from this PR and ran the data import.
It looks like when an intensive update process runs, many related errors happen and data is lost. It needs to be looked at as a whole. Retries happen only during one minute; is this correct? If Spanner cannot save the data within one minute, will the data be lost? Otherwise, this client library does not fulfill the basic functions of Spanner (the capability to process big data fast and correctly). Please take the time to improve all of this as soon as possible. Since 2017-04-02 I have been talking about these issues, and nothing has improved :( I am ready to assist you and the Spanner team however I can.
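On the retry question: if the library's internal retry window really is limited, an application-level retry loop at least keeps writes from being silently dropped. A hedged sketch; the retryable gRPC codes chosen here (8 = RESOURCE_EXHAUSTED, 10 = ABORTED, 4 = DEADLINE_EXCEEDED) are assumptions based on the errors reported in this thread:

```js
// Sketch: retry an insert with exponential backoff instead of giving
// up after the client's internal retries are exhausted.
function insertWithRetry(table, row, attempt, done) {
  table.insert(row, function (err) {
    if (!err) {
      return done(null);
    }
    var retryable = err.code === 8 || err.code === 10 || err.code === 4;
    if (!retryable || attempt >= 5) {
      return done(err); // surface the failure; never drop data silently
    }
    var delayMs = Math.pow(2, attempt) * 1000; // 1s, 2s, 4s, ...
    setTimeout(function () {
      insertWithRetry(table, row, attempt + 1, done);
    }, delayMs);
  });
}
```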
We're also experiencing the `requestStream.destroy is not a function` error.
@JoshFerge do you have an error stack you could share? Or possibly code to duplicate the error?
Here is the error stack, @callmehiphop:
Is this a separate issue altogether?
The final fix (we hope!) is out under
@Chipintoza If there are other problems in #2356 (comment) that are still present in the
OS: Mac OS X 10.12.4
Node.js version: 6.9.3
npm version: 5.0.0
@google-cloud/spanner version: 0.4.4
After closing #2176 I ran my data export in Spanner. When a number of requests ran in parallel, the following error occurred several times.
After a certain quantity of data had been exported, the following error kept occurring constantly:
```
InternalServerError: err.code: 8
```
I stopped my process, and for 20 minutes I could not run any queries from the console... I got the same error.
When will the sessions be closed? Look at the pictures.
I waited around 40 minutes and then deleted the database.
https://cloud.google.com/spanner/docs/limits
Sessions per database per node: 10,000
What does this mean?
How long do I need to wait for the sessions to close?
Thank you
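As a mitigation for exports like the one described here, bounding write concurrency keeps the pool from ever needing more than a fixed number of sessions. A sketch using the `async` utility library; the limit of 20 and the table layout are made up:

```js
var async = require('async'); // third-party utility library

// Sketch: at most 20 inserts (and therefore roughly 20 sessions)
// in flight at any moment.
function exportUsers(database, users, done) {
  var table = database.table('users'); // made-up table name
  async.eachLimit(users, 20, function (user, callback) {
    table.insert(user, callback);
  }, done);
}
```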