Proposal:
You can specify the maximum body size of the InfluxDB instance, and the client splits the data points into correctly sized batches.
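A hypothetical usage of such an option might look like the sketch below. The `max_body_size` parameter does not exist in the client today; it is only meant to illustrate the proposal, and the import assumes the 1.x `influxdb` Python client based on the `write_points` call used in this issue:

```python
from influxdb import InfluxDBClient  # assumes the 1.x influxdb-python client

points = [
    {"measurement": "cpu", "tags": {"host": "server01"}, "fields": {"value": 0.64}},
    # ... many more points
]

# `max_body_size` is hypothetical; it illustrates the proposed option only.
client = InfluxDBClient(host="localhost", port=8086, database="example",
                        max_body_size=1_000_000)

# The client would then split `points` into requests of at most ~1 MB each.
client.write_points(points)
```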
Current behavior:
If you try to send a larger batch of data points with `client.write_points(points)`, you get the error message:

```
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.17.4</center>
</body>
</html>
```
This already happens when the plain data points amount to only 1,500,000 bytes.
Desired behavior:
The client checks the size of the body and splits the data into multiple requests if necessary.
Use case:
We do not own the InfluxDB instance and cannot change its max body size.
It is also hard to calculate the byte size of the body ourselves, because the body is built inside this library.
Furthermore, the byte size of our data points varies a lot, so it is very hard to find one batch size that works for all of them.
Currently we don't want to include this in our library because it is not that common. I've prepared the following example to show you how to achieve the required functionality with RxPy:
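The RxPy example itself is not preserved in this thread. As a rough plain-Python sketch of the same idea, splitting a large set of points into size-limited requests on the client side before calling `write_points`, something like the following could work. The `MAX_BODY_BYTES` value, the `split_by_body_size` helper, and the JSON-based size estimate are assumptions for illustration, not part of the library, and the import again assumes the 1.x `influxdb` client:

```python
import json

from influxdb import InfluxDBClient  # assumes the 1.x influxdb-python client

# Assumed limit; tune it to the proxy's client_max_body_size.
MAX_BODY_BYTES = 1_000_000


def split_by_body_size(points, max_bytes=MAX_BODY_BYTES):
    """Yield lists of points whose estimated serialized size stays under max_bytes.

    The JSON representation is only an approximation of the body the client
    actually sends, so leave some headroom in max_bytes.
    """
    batch, size = [], 0
    for point in points:
        point_size = len(json.dumps(point).encode("utf-8"))
        if batch and size + point_size > max_bytes:
            yield batch
            batch, size = [], 0
        batch.append(point)
        size += point_size
    if batch:
        yield batch


client = InfluxDBClient(host="localhost", port=8086, database="example")

points = [
    {"measurement": "cpu", "tags": {"host": "server01"}, "fields": {"value": 0.64}},
    # ... many more points
]

for batch in split_by_body_size(points):
    client.write_points(batch)
```

Because the splitting happens before each request is sent, each `write_points` call stays under the assumed limit regardless of how much the individual point sizes vary.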