
Spanner - noStatus - TypeError: requestStream.destroy is not a function #2356

Closed
Chipintoza opened this issue Jun 5, 2017 · 21 comments
Labels: api: spanner, priority: p0, type: bug

@Chipintoza commented Jun 5, 2017

OS: Mac OSX 10.12.4
Node.js version: 6.9.3
npm version: 5.0.0
@google-cloud/spanner version: 0.4.4

After #2176 was closed, I ran my data export against Spanner. With a number of requests running in parallel, the following error occurred several times:

noStatus - TypeError: requestStream.destroy is not a function
    at StreamProxy.<anonymous> (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/@google-cloud/spanner/src/transaction.js:375:19)
    at emitOne (events.js:96:13)
    at StreamProxy.emit (events.js:188:7)
    at ClientReadableStream.<anonymous> (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/google-gax/lib/streaming.js:130:17)
    at emitOne (events.js:96:13)
    at ClientReadableStream.emit (events.js:188:7)
    at ClientReadableStream._emitStatusIfDone (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/grpc/src/node/src/client.js:204:12)
    at ClientReadableStream._receiveStatus (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/grpc/src/node/src/client.js:180:8)
    at /Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/grpc/src/node/src/client.js:592:14

After a certain amount of data had been exported, the following error kept occurring constantly:

  • req.url - http://accounts.gss.local:9001/gss-api/environments/upsert
    InternalServerError: err.code: 8
  • err.message: Too many active sessions in database projects/spanner-gss-ge/instances/business-data/databases/system-development, limit is 30000. Increase the node count to allow more sessions.
  • err.status: undefined
  • err location: Environment.upsert\database.runTransaction((err, tx) => {...
  • data:
{
  "environmentId": "5e9d511f-e300-49d8-80ba-21c421f98b03",
  "isActive": true,
  "created": 1325376000000,
  "timestamp": 1496655116694,
  "modifierId": "system-id",
  "modifierName": "system",
  "name": "ka:ჩემი 'GSS გარემო' - [email protected]|en:My 'GSS Environment' - [email protected]|ru:Моя 'GSS Среда' - [email protected]",
  "databaseID": "39aa56bc-f926-4df2-afe9-619c1e527afe",
  "isUnderControl": false,
  "environmentSyncURL": "",
  "linkedGSSContactID": null,
  "linkedContactID": null,
  "inputLanguages": "en ka ru",
  "usersInputLanguages": "",
  "registrationCountry": "",
  "rsgeServiceUser": "",
  "rsgeServiceUserPassword": "",
  "mainCurrencySignification": null,
  "accountingCurrencySignifications": "",
  "domain": null,
  "gssEducationIsActive": false
}
  • apiResponse:

I stopped my process, and for about 20 minutes I could not run any queries from the console; I kept getting the same error:

Too many active sessions in database projects/spanner-gss-ge/instances/business-data/databases/system-development, limit is 30000. Increase the node count to allow more sessions.

When will the sessions be closed? See the screenshots below:

[Screenshot: 2017-06-05 at 13:55:19]

[Screenshot: 2017-06-05 at 13:55:27]

I waited around 40 minutes and then deleted the database.
https://cloud.google.com/spanner/docs/limits says: Sessions per database per node: 10,000.
What does that mean? How long do I need to wait for the sessions to close?

Thank you

@Chipintoza (Author)

As I understand from "Handle deleted sessions":

The Cloud Spanner database service can delete a session when the session is idle for more than 1 hour.

In my opinion, after the exception the sessions stayed open. Is there any way to close them? (Why were they not closed when I stopped the program?)

When I had only 1 node, the session limit was hit after I had exported around 4,000 users. With 3 nodes it happened after around 7,000 users.

I think the client library does not close and manage sessions well, because the quantity of data exported is not that large... or maybe I need to configure poolOptions correctly. Please explain.

Thank you

stephenplusplus added the api: spanner label on Jun 5, 2017
@stephenplusplus (Contributor) commented Jun 5, 2017

Thank you for the detailed report!

@callmehiphop -- regarding this error:

TypeError: requestStream.destroy is not a function
- source

It's likely that the request stream returned from GAX doesn't have destroy(), and we'll have to just end() it. Will that resolve the other issues @Chipintoza brought up, or are they unrelated?
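Roughly the kind of guard I have in mind (just a sketch -- the helper name below is made up for illustration, not the actual patch):

    // Sketch only: prefer destroy() when the stream exposes it, otherwise fall
    // back to end(). The helper name is made up for illustration.
    function stopRequestStream(requestStream) {
      if (typeof requestStream.destroy === 'function') {
        requestStream.destroy();
      } else {
        requestStream.end();
      }
    }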

stephenplusplus added the type: bug label on Jun 5, 2017
@Chipintoza (Author)

I do not know whether they are related or not; maybe they are. The old error is no longer appearing, which means that many retries now happen with new requests and the sessions remain open.

And they remain open even after I restart the app. In order not to wait for 1 hour, I have to delete the database.

Maybe sessions always remain open, and it only becomes visible when there are many calls.

I'm working on my project, and I hope that together we can close these important issues.

thanks

@callmehiphop (Contributor)

@stephenplusplus If I understand correctly, the client crashes, which leads to a bunch of sessions not getting deleted. There is not an RPC (I don't think, at least) to retrieve a list of pre-existing sessions, so if a user's app crashes multiple times in a short period, it's possible that they could hit the backend cap.

@Chipintoza does that sound right? We have a method that will delete all the sessions the client is aware of (database#close), but I'm not sure if that will help you in this scenario.
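For example, something along these lines on shutdown (a sketch only -- the instance/database names are taken from the report, and you should check database#close against the docs for the version you have installed):

    // Sketch: delete the pooled sessions this client created when shutting down.
    // Instance/database names come from the report above; adjust for your setup.
    const spanner = require('@google-cloud/spanner')({ projectId: 'spanner-gss-ge' });
    const database = spanner.instance('business-data').database('system-development');

    process.on('SIGINT', () => {
      // database#close only deletes sessions this client is aware of; it cannot
      // reach sessions leaked by an earlier process that crashed.
      database.close(err => {
        if (err) console.error('failed to close the session pool', err);
        process.exit(err ? 1 : 0);
      });
    });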

@vkedia (Contributor) commented Jun 5, 2017

One important question is how the client library ended up creating 30,000 sessions in the first place. Is it that when a request fails, the session is not released back to the pool? Also, do we have a limit on the size of the pool?
From the report it is not clear to me whether the process was run multiple times, in which case one can see how so many sessions were created.

@Chipintoza (Author)

@callmehiphop my app does not crash multiple times; I ran it just once. At first, during the process, there were a few errors of the form
noStatus - TypeError: requestStream.destroy is not a function.
After a certain quantity of data had been exported (I mention the Users table, but during the process related data was exported to the other tables too), the sessions reached the maximum because they were not released.
I ran it the first time on 1 node and the second time on 3 nodes.
Is there a way to log the number of currently open sessions?
This second issue is also important: if we cannot manage pre-existing sessions, it means a user might not be able to run any queries for up to 1 hour.
@vkedia I think the same: during the process, sessions are not being released back to the pool.

@callmehiphop (Contributor)

The cap for sessions should be 100, so I'm a little confused about why that number gets that high as well. @Chipintoza do you have some sample code we could use to recreate this?
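For reference, the pool is configured when the database handle is opened; a minimal sketch is below (the option names here are assumptions -- please check the session pool documentation for the version you have installed):

    // Sketch: tune the session pool when opening the database handle.
    // The option names (max, min) are assumptions; verify them for your version.
    const spanner = require('@google-cloud/spanner')({ projectId: 'spanner-gss-ge' });
    const instance = spanner.instance('business-data');

    const database = instance.database('system-development', {
      max: 100, // upper bound on the number of sessions this pool will create
      min: 0    // number of sessions kept open while the pool is idle
    });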

@Chipintoza (Author)

@callmehiphop I have added you to my project:
https://github.com/GSS-LLC/accounts.gss.ge/invitations
I am importing data using accounts.gss.ge/server/gss-api/.
You can look at accounts.gss.ge/server/tables/system/.

@Chipintoza (Author)

@callmehiphop you can create the database with accounts.gss.ge/working-js/spanner-create-database.js.

@JoshFerge (Contributor) commented Jun 9, 2017

@stephenplusplus I am also seeing this error in our production environment. Let me know if there is any more info I can provide, or whether there is a bad practice on the application side that leads to this error; that would be helpful for me to know.

@Chipintoza (Author) commented Jul 6, 2017

@stephenplusplus @vkedia I deleted all of the @google-cloud/ packages, updated spanner to v0.5.0, and ran my data import.

After a certain quantity of data had been imported, the following error kept occurring constantly:

noStatus - TypeError: requestStream.destroy is not a function
    at StreamProxy.<anonymous> (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/@gss-llc/database-utils.gss.ge/node_modules/@google-cloud/spanner/src/transaction.js:375:19)
    at emitOne (events.js:96:13)
    at StreamProxy.emit (events.js:188:7)
    at ClientReadableStream.<anonymous> (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/google-gax/lib/streaming.js:130:17)
    at emitOne (events.js:96:13)
    at ClientReadableStream.emit (events.js:188:7)
    at ClientReadableStream._emitStatusIfDone (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/grpc/src/node/src/client.js:212:12)
    at ClientReadableStream._receiveStatus (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/grpc/src/node/src/client.js:188:8)
    at /Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/grpc/src/node/src/client.js:600:14

and later:

req.url - http://accounts.gss.local:9001/gss-api/users/upsert
InternalServerError: err.code: 8
err.message: Too many active sessions in database projects/spanner-gss-ge/instances/development-data/databases/system-development, limit is 10000. Increase the node count to allow more sessions.
err.status: undefined
err location:
User.upsert\database.runTransaction((err, tx) => {...
data:
{"userId":"dde6f76d-e772-42ec-8175-820f7b2f26a4","isActive":true,"created":1352473006237,"timestamp":1499323049178,"modifierId":"system-id","modifierName":"system","eMail":"[email protected]","contactName":"ლელა პატარაია","mobile":"","webBrowserID":"35414b4d-bb09-4105-8d88-10c4c1b24bca","photoID":null,"gssGroupID":"b716aa80-4080-4e44-96ae-9de12d8a0c47","timeZoneID":"Georgian Standard Time","gssClientID":null,"purchasedServicesCasheExpiryDateString":"{"DC360B17-62E6-4956-A71E-1A6BD9943E55":{"e9e5b8c9-bc25-4925-9b86-329928c3583e":"28/02/2014"}}","note":"","environmentsIds":[],"domainEnvironmentID":null,"checkDomainServices":"","googleName":"","googlePictureUrl":"","googleProfile":"","facebookName":"","facebookPictureUrl":"","facebookProfile":"","signedInBy":"eMail","envLanguage":"ka"}
apiResponse:

    at Function.dbUtils.getDsError (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/@gss-llc/database-utils.gss.ge/index.js:213:16)
    at error (/Users/Chipintoza/GSS Projects/accounts.gss.ge/server/tables/system/users-table.js:1057:64)
    at database.runTransaction (/Users/Chipintoza/GSS Projects/accounts.gss.ge/server/tables/system/users-table.js:1073:27)
    at /Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/@gss-llc/database-utils.gss.ge/node_modules/@google-cloud/spanner/src/database.js:816:7
    at /Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/@gss-llc/database-utils.gss.ge/node_modules/@google-cloud/spanner/src/database.js:1040:7
    at /Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/@gss-llc/database-utils.gss.ge/node_modules/@google-cloud/spanner/src/session-pool.js:471:9
    at /Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/@gss-llc/database-utils.gss.ge/node_modules/@google-cloud/spanner/src/session-pool.js:230:9
    at process._tickCallback (internal/process/next_tick.js:103:7)

It looks like the same problem :(

vkedia added the priority: p0 label on Jul 10, 2017
@vkedia (Contributor) commented Jul 10, 2017

@danoscarmike @lukesneeringer @callmehiphop can we please get this resolved ASAP? This is causing customers to get locked out of Spanner for up to an hour.

@bjwatson

@callmehiphop @lukesneeringer What's the ETA for fixing this issue?

@callmehiphop (Contributor) commented Jul 12, 2017

@bjwatson working on this right now.

For anyone experiencing a similar issue, you'll want to check that you're not creating multiple Database instances for the same database. Every time a Database instance is created, it also creates a new session pool (hence why we see the number of sessions get really high).

The PR I'm working on is going to cache the pools, so users don't have to be bothered with caching Database instances and we won't spin up sessions unnecessarily. In the meantime, the pattern to aim for is sketched below.
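Something like this (a sketch only -- the table and handler names are illustrative, not taken from the reporter's project):

    // Sketch: create the Database handle (and therefore its session pool) once per
    // process and reuse it for every request. Names here are illustrative only.
    const spanner = require('@google-cloud/spanner')({ projectId: 'spanner-gss-ge' });
    const database = spanner.instance('business-data').database('system-development');

    function upsertUser(row, callback) {
      database.runTransaction((err, transaction) => {
        if (err) return callback(err);
        transaction.upsert('Users', row); // queue the mutation on this transaction
        transaction.commit(callback);     // committing releases the session back to the pool
      });
    }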

@Chipintoza (Author)

@stephenplusplus with one Database instance there are also many problems; I was waiting for you to finish the pool caching work.
When there is only one Database instance, it looks like @google-cloud/spanner v0.5.0 does not include PR 2358, and this error happened:

http://accounts.gss.local:9001/gss-api/users/upsert
noStatus - TypeError: requestStream.destroy is not a function
    at StreamProxy.<anonymous> (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/@gss-llc/database-utils.gss.ge/node_modules/@google-cloud/spanner/src/transaction.js:375:19)
    at emitOne (events.js:96:13)
    at StreamProxy.emit (events.js:188:7)
    at ClientReadableStream.<anonymous> (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/google-gax/lib/streaming.js:130:17)
    at emitOne (events.js:96:13)
    at ClientReadableStream.emit (events.js:188:7)
    at ClientReadableStream._emitStatusIfDone (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/grpc/src/node/src/client.js:212:12)
    at ClientReadableStream._receiveStatus (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/grpc/src/node/src/client.js:188:8)
    at /Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/grpc/src/node/src/client.js:600:14

Because of this, the retry process did not happen and a lot of data was lost.

When I updated packages/spanner/src/transaction.js from that PR and ran the data import, new errors appeared:

(node:38781) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): TypeError: Cannot read property 'run' of undefined
and
(node:38781) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 10): TypeError: Cannot set property 'beginTime_' of undefined
I cannot trace which file those errors come from.
And:

noStatus - TypeError: this.stream.end is not a function
    at StreamProxy._onFinish (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/google-gax/lib/streaming.js:103:17)
    at StreamProxy.<anonymous> (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/google-gax/lib/streaming.js:76:10)
    at emitNone (events.js:86:13)
    at StreamProxy.emit (events.js:185:7)
    at finishMaybe (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/readable-stream/lib/_stream_writable.js:503:14)
    at endWritable (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/readable-stream/lib/_stream_writable.js:513:3)
    at StreamProxy.Writable.end (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/readable-stream/lib/_stream_writable.js:483:41)
    at StreamProxy.<anonymous> (/Users/Chipintoza/GSS Projects/accounts.gss.ge/node_modules/@gss-llc/database-utils.gss.ge/node_modules/@google-cloud/spanner/src/transaction.js:375:19)
    at emitOne (events.js:96:13)
    at StreamProxy.emit (events.js:188:7)

It looks like when an intensive update process is run, many related errors happen and data is lost.

This needs to be looked at as a whole. Retries happen only during one minute; is that correct? If Spanner cannot save the data within one minute, will the data be lost?

Otherwise, this client library does not fulfill the basic functions of Spanner (the capability to process big data quickly and correctly).

Please take the time to improve all of this as soon as possible.

I have been talking about these issues since 2017-04-02, and nothing has improved :(

I am ready to assist you and the Spanner team however I can.
Thank you

P.S.
@danoscarmike @lukesneeringer @callmehiphop @vkedia

@JoshFerge (Contributor)

We're also experiencing the TypeError: this.stream.end is not a function in our system now.

@callmehiphop (Contributor) commented Jul 14, 2017

@JoshFerge do you have an error stack you could share?

Or possibly code to duplicate the error

@JoshFerge (Contributor) commented Jul 14, 2017

Here is the error stack, @callmehiphop:

TypeError: this.stream.end is not a function
    at StreamProxy._onFinish (/usr/src/app/node_modules/google-gax/lib/streaming.js:103:17)
    at StreamProxy.<anonymous> (/usr/src/app/node_modules/google-gax/lib/streaming.js:76:10)
    at emitNone (events.js:105:13)
    at StreamProxy.emit (events.js:207:7)
    at finishMaybe (/usr/src/app/node_modules/readable-stream/lib/_stream_writable.js:607:14)
    at endWritable (/usr/src/app/node_modules/readable-stream/lib/_stream_writable.js:615:3)
    at StreamProxy.Writable.end (/usr/src/app/node_modules/readable-stream/lib/_stream_writable.js:571:41)
    at StreamProxy.Duplex._destroy (/usr/src/app/node_modules/readable-stream/lib/_stream_duplex.js:115:8)
    at StreamProxy.destroy (/usr/src/app/node_modules/readable-stream/lib/internal/streams/destroy.js:36:8)
    at StreamProxy.<anonymous> (/usr/src/app/node_modules/@google-cloud/spanner/src/transaction.js:375:19)

Is this a separate issue altogether?

@stephenplusplus (Contributor)

The final fix (we hope!) is out under @google-cloud/[email protected] -- please let us know if there are any issues.
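To pick up the fix, upgrade the dependency:

    npm install --save @google-cloud/[email protected]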

@bjwatson

@Chipintoza If there are other problems in #2356 (comment) that are still present in the 0.6.0 release, can you create separate issue(s) for them, and tag me and @lukesneeringer. Thanks!
