Add influx push endpoint to mimir #10153

Merged · 57 commits · Jan 17, 2025

Changes from 13 commits

Commits
444d34f
olegs base commits from #1971
alexgreenbank Dec 6, 2024
adb376a
move top level influx files
alexgreenbank Dec 6, 2024
3db179f
latest wip
alexgreenbank Dec 9, 2024
295fa8d
still WIP but better, still need to move to parserFunc() style
alexgreenbank Dec 10, 2024
d2897e3
it builds!
alexgreenbank Dec 10, 2024
c2fb679
tweaks and add span logging
alexgreenbank Dec 11, 2024
593b2e2
more todo
alexgreenbank Dec 11, 2024
31bf23d
further tweaks
alexgreenbank Dec 12, 2024
2487a88
some fixes to tests
alexgreenbank Dec 12, 2024
03e8e3c
rejigged error handling, tests passing
alexgreenbank Dec 12, 2024
0bd8da8
add vendored influxdb code
alexgreenbank Dec 12, 2024
4a76a11
lint
alexgreenbank Dec 13, 2024
066c009
go mod sum vendor/modules.txt
alexgreenbank Dec 13, 2024
bec3a26
add a metric, add tenant info, other tweaks
alexgreenbank Dec 17, 2024
ac51def
various rework, still WIP
alexgreenbank Dec 17, 2024
3a57dc6
propagate bytesRead down to caller and log and histogram
alexgreenbank Dec 17, 2024
92379e4
remove comment now dealt with
alexgreenbank Dec 17, 2024
d44c71d
add defaults in error handling
alexgreenbank Dec 17, 2024
591389e
Add note to docs about experimental Influx flag
alexgreenbank Dec 17, 2024
afbc357
Note influx endpoint as experimental too
alexgreenbank Dec 17, 2024
847bcb9
test for specific errors received
alexgreenbank Dec 17, 2024
320c467
bolster parser tests
alexgreenbank Dec 17, 2024
730a7c3
Use literal chars rather than ascii codes
alexgreenbank Dec 17, 2024
de27d4b
remove unnecessary cast to int()
alexgreenbank Dec 18, 2024
af3def1
use mimirpb.PreallocTimeseries in influx parser
alexgreenbank Dec 19, 2024
d65b3a5
remove unnecessary tryUnwrap()
alexgreenbank Dec 19, 2024
9d94276
Work on byteslice rather than chars
alexgreenbank Dec 19, 2024
258fe0d
yoloString for label value as push code does not keep references to s…
alexgreenbank Dec 19, 2024
e5252d4
update go.sum
alexgreenbank Dec 19, 2024
f86691b
gah go.sum
alexgreenbank Dec 19, 2024
32cc156
oops, missed removal of paramter to InfluxHandler()
alexgreenbank Dec 19, 2024
c798360
wrong metrics incremented
alexgreenbank Dec 19, 2024
8d4e7ca
lint
alexgreenbank Dec 19, 2024
e915764
lint
alexgreenbank Dec 19, 2024
013b3d6
go mod tidy && go mod vendor
alexgreenbank Dec 19, 2024
3c5a166
go.sum conflict
alexgreenbank Dec 19, 2024
773722f
merge latest main
alexgreenbank Jan 2, 2025
6143162
make doc
alexgreenbank Jan 2, 2025
ac4e491
Merge branch 'main' into alexg/influx-push-handler
alexgreenbank Jan 7, 2025
767695a
make influx config hidden/experimental
alexgreenbank Jan 9, 2025
419d327
fix byteslice handling in replaceInvalidChars()
alexgreenbank Jan 10, 2025
9e9e117
remove unnecessary TODOs
alexgreenbank Jan 10, 2025
0da4b8f
influx: happy path e2e test
alexgreenbank Jan 10, 2025
c470fb3
lint
alexgreenbank Jan 10, 2025
c44d321
consolidate logging
alexgreenbank Jan 10, 2025
537fa37
CHANGELOG
alexgreenbank Jan 10, 2025
14bae20
about-versioning.md
alexgreenbank Jan 10, 2025
b951127
Merge branch 'main' into alexg/influx-push-handler
alexgreenbank Jan 10, 2025
9d035f5
merge main
alexgreenbank Jan 10, 2025
c31c191
Merge branch 'alexg/influx-push-handler' of github.com:grafana/mimir …
alexgreenbank Jan 10, 2025
0872e6a
Update pkg/distributor/influxpush/parser.go
alexgreenbank Jan 14, 2025
de92ac5
Update pkg/distributor/influxpush/parser.go
alexgreenbank Jan 14, 2025
0e6cea6
Update pkg/distributor/influxpush/parser.go
alexgreenbank Jan 14, 2025
8afdd9b
fix parsing string replacing code
alexgreenbank Jan 14, 2025
7018f72
fix merge conflicts
alexgreenbank Jan 16, 2025
d81ac52
fix nits
alexgreenbank Jan 17, 2025
332576e
Merge branch 'main' into alexg/influx-push-handler
alexgreenbank Jan 17, 2025
1 change: 1 addition & 0 deletions go.mod
@@ -25,6 +25,7 @@ require (
github.com/grafana/dskit v0.0.0-20241125123840-77bb9ddddb0c
github.com/grafana/e2e v0.1.2-0.20240118170847-db90b84177fc
github.com/hashicorp/golang-lru v1.0.2 // indirect
github.com/influxdata/influxdb/v2 v2.7.11
github.com/json-iterator/go v1.1.12
github.com/minio/minio-go/v7 v7.0.81
github.com/mitchellh/go-wordwrap v1.0.1
2 changes: 2 additions & 0 deletions go.sum
@@ -1390,6 +1390,8 @@ github.com/imdario/mergo v0.3.16 h1:wwQJbIsHYGMUyLSPrEq1CT16AhnhNJQ51+4fdHUnCl4=
github.com/imdario/mergo v0.3.16/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/influxdata/influxdb/v2 v2.7.11 h1:qs9qr5hsuFrlTiBtr5lBrALbQ2dHAanf21fBLlLpKww=
github.com/influxdata/influxdb/v2 v2.7.11/go.mod h1:zNOyzQy6WbfvGi1CK1aJ2W8khOq9+Gdsj8yLj8bHHqg=
github.com/ionos-cloud/sdk-go/v6 v6.2.1 h1:mxxN+frNVmbFrmmFfXnBC3g2USYJrl6mc1LW2iNYbFY=
github.com/ionos-cloud/sdk-go/v6 v6.2.1/go.mod h1:SXrO9OGyWjd2rZhAhEpdYN6VUAODzzqRdqA9BCviQtI=
github.com/jessevdk/go-flags v1.5.0 h1:1jKYvbxEjfUl0fmqTCOfonvskHHXMjBySTLW4y9LFvc=
5 changes: 5 additions & 0 deletions pkg/api/api.go
@@ -259,6 +259,7 @@ func (a *API) RegisterRuntimeConfig(runtimeConfigHandler http.HandlerFunc, userL

const PrometheusPushEndpoint = "/api/v1/push"
const OTLPPushEndpoint = "/otlp/v1/metrics"
const InfluxPushEndpoint = "/api/v1/influx/push"
colega marked this conversation as resolved.

// RegisterDistributor registers the endpoints associated with the distributor.
func (a *API) RegisterDistributor(d *distributor.Distributor, pushConfig distributor.Config, reg prometheus.Registerer, limits *validation.Overrides) {
@@ -268,6 +269,10 @@ func (a *API) RegisterDistributor(d *distributor.Distributor, pushConfig distrib
pushConfig.MaxRecvMsgSize, d.RequestBufferPool, a.sourceIPs, a.cfg.SkipLabelNameValidationHeader,
a.cfg.SkipLabelCountValidationHeader, limits, pushConfig.RetryConfig, d.PushWithMiddlewares, d.PushMetrics, a.logger,
), true, false, "POST")
// TODO(alexg): hidden behind a featureflag or experimental config option?
a.RegisterRoute(InfluxPushEndpoint, distributor.InfluxHandler(
Contributor: I don't think a feature flag for this is needed. We can just state in the docs (about-versioning.md) that the endpoint is experimental.

Contributor Author: OK, done.

pushConfig.MaxInfluxRequestSize, d.RequestBufferPool, a.sourceIPs, pushConfig.RetryConfig, d.PushWithMiddlewares, d.PushMetrics, reg, a.logger,
), true, false, "POST")
a.RegisterRoute(OTLPPushEndpoint, distributor.OTLPHandler(
pushConfig.MaxOTLPRequestSize, d.RequestBufferPool, a.sourceIPs, limits, pushConfig.OTelResourceAttributePromotionConfig,
pushConfig.RetryConfig, d.PushWithMiddlewares, d.PushMetrics, reg, a.logger,
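For context, here is a minimal sketch of how a client might exercise the new endpoint once this change is deployed. Only the /api/v1/influx/push path and the 204 success response come from this PR; the base URL, the tenant name, and the use of the X-Scope-OrgID multi-tenancy header are assumptions for illustration.

package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// One Influx line-protocol sample: measurement "measurement", tag t1=v1, field f1=2.
	body := strings.NewReader("measurement,t1=v1 f1=2 1465839830100400200")

	// Assumed local distributor address; adjust for your deployment.
	req, err := http.NewRequest(http.MethodPost, "http://localhost:8080/api/v1/influx/push", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Scope-OrgID", "tenant-1") // assumed multi-tenant setup

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// On success the handler replies 204 No Content (see pkg/distributor/influx.go below).
	fmt.Println("status:", resp.StatusCode)
}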
6 changes: 5 additions & 1 deletion pkg/distributor/distributor.go
@@ -86,7 +86,8 @@ const (
// metaLabelTenantID is the name of the metric_relabel_configs label with tenant ID.
metaLabelTenantID = model.MetaLabelPrefix + "tenant_id"

maxOTLPRequestSizeFlag = "distributor.max-otlp-request-size"
maxOTLPRequestSizeFlag = "distributor.max-otlp-request-size"
maxInfluxRequestSizeFlag = "distributor.max-influx-request-size"

instanceIngestionRateTickInterval = time.Second

@@ -200,6 +201,7 @@ type Config struct {

MaxRecvMsgSize int `yaml:"max_recv_msg_size" category:"advanced"`
MaxOTLPRequestSize int `yaml:"max_otlp_request_size" category:"experimental"`
MaxInfluxRequestSize int `yaml:"max_influx_request_size" category:"experimental"`
MaxRequestPoolBufferSize int `yaml:"max_request_pool_buffer_size" category:"experimental"`
RemoteTimeout time.Duration `yaml:"remote_timeout" category:"advanced"`

@@ -255,6 +257,7 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet, logger log.Logger) {

f.IntVar(&cfg.MaxRecvMsgSize, "distributor.max-recv-msg-size", 100<<20, "Max message size in bytes that the distributors will accept for incoming push requests to the remote write API. If exceeded, the request will be rejected.")
f.IntVar(&cfg.MaxOTLPRequestSize, maxOTLPRequestSizeFlag, 100<<20, "Maximum OTLP request size in bytes that the distributors accept. Requests exceeding this limit are rejected.")
f.IntVar(&cfg.MaxInfluxRequestSize, maxInfluxRequestSizeFlag, 100<<20, "Maximum Influx request size in bytes that the distributors accept. Requests exceeding this limit are rejected.")
f.IntVar(&cfg.MaxRequestPoolBufferSize, "distributor.max-request-pool-buffer-size", 0, "Max size of the pooled buffers used for marshaling write requests. If 0, no max size is enforced.")
f.DurationVar(&cfg.RemoteTimeout, "distributor.remote-timeout", 2*time.Second, "Timeout for downstream ingesters.")
f.BoolVar(&cfg.WriteRequestsBufferPoolingEnabled, "distributor.write-requests-buffer-pooling-enabled", true, "Enable pooling of buffers used for marshaling write requests.")
@@ -282,6 +285,7 @@ const (
)

type PushMetrics struct {
// TODO(alexg): influx metrics here?
otlpRequestCounter *prometheus.CounterVec
uncompressedBodySize *prometheus.HistogramVec
}
177 changes: 177 additions & 0 deletions pkg/distributor/influx.go
@@ -0,0 +1,177 @@
// SPDX-License-Identifier: AGPL-3.0-only

package distributor

import (
"context"
"errors"
"net/http"

"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/grafana/dskit/grpcutil"
"github.com/grafana/dskit/httpgrpc"
"github.com/grafana/dskit/middleware"
io2 "github.com/influxdata/influxdb/v2/kit/io"
Contributor: Suggested change: replace

io2 "github.com/influxdata/influxdb/v2/kit/io"

with

influxio "github.com/influxdata/influxdb/v2/kit/io"

Contributor Author: Agreed, fixed in next push.

"github.com/prometheus/client_golang/prometheus"

"github.com/grafana/mimir/pkg/distributor/influxpush"
"github.com/grafana/mimir/pkg/mimirpb"
"github.com/grafana/mimir/pkg/util"
utillog "github.com/grafana/mimir/pkg/util/log"
"github.com/grafana/mimir/pkg/util/spanlogger"
)

func parser(ctx context.Context, r *http.Request, maxSize int, _ *util.RequestBuffers, req *mimirpb.PreallocWriteRequest, logger log.Logger) error {
Contributor: This function lives in pkg/distributor, I would say it's a little bit pretentious to take the name parser for this :D How about influxRequestParser?

Contributor Author: Agreed, fixed in next push.

spanLogger, ctx := spanlogger.NewWithLogger(ctx, logger, "Distributor.InfluxHandler.decodeAndConvert")
defer spanLogger.Span.Finish()

spanLogger.SetTag("content_type", r.Header.Get("Content-Type"))
spanLogger.SetTag("content_encoding", r.Header.Get("Content-Encoding"))
spanLogger.SetTag("content_length", r.ContentLength)

ts, bytesRead, err := influxpush.ParseInfluxLineReader(ctx, r, maxSize)
// TODO(alexg): one argument for splitting up the decoding and conversion is to facilitate granular timings
// right now since ParseInfluxLineReader() does both
// The otel version decodes the whole input and then processes it, the existing Influx code parses each line as it
// decodes it.
level.Debug(spanLogger).Log(
"msg", "decodeAndConvert complete",
"bytesRead", bytesRead,
)
Contributor: Opinionated style nit: I don't think we need 4 lines for this debug log.

Contributor Author: Fixed in next push.

if err != nil {
level.Error(logger).Log("err", err.Error())
Contributor: Does this logger have all the required context? Can you also add some context here about what was going on when this happened? I'm scared of finding this log:

ts=2024-12-17 err="unexpected EOF"

Also, nit: the .Error() call is not needed, just pass err.

Contributor Author: Agreed, fixed in next push.

// TODO(alexg): need to pass on the http.StatusBadRequest
// http.Error(w, err.Error(), http.StatusBadRequest)
return err
}

// Sigh, a write API optimisation needs me to jump through hoops.
pts := make([]mimirpb.PreallocTimeseries, 0, len(ts))
Contributor: You should use mimirpb.PreallocTimeseriesSliceFromPool() instead of creating a new slice every time. Also, IMO it would make sense to change influxpush.ParseInfluxLineReader to return []PreallocTimeseries instead of []TimeSeries, because that's what we deal with later.

Contributor Author: Done. Fixed in latest push.
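To illustrate the pooling suggestion, a minimal sketch follows, under the assumption that mimirpb.PreallocTimeseriesSliceFromPool() hands back an empty pooled []PreallocTimeseries; the function name buildTimeseries is hypothetical. The slice must eventually be returned to the pool via mimirpb.ReuseSlice, which the handler's cleanup func below already does.

package sketch

import "github.com/grafana/mimir/pkg/mimirpb"

// buildTimeseries converts parsed Influx series into the write-request form,
// reusing a pooled slice instead of make([]mimirpb.PreallocTimeseries, 0, len(ts)).
func buildTimeseries(ts []mimirpb.TimeSeries) []mimirpb.PreallocTimeseries {
	pts := mimirpb.PreallocTimeseriesSliceFromPool()
	for i := range ts {
		pts = append(pts, mimirpb.PreallocTimeseries{TimeSeries: &ts[i]})
	}
	return pts
}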

for i := range ts {
pts = append(pts, mimirpb.PreallocTimeseries{
TimeSeries: &ts[i],
})
}

level.Debug(spanLogger).Log(
"msg", "Influx to Prometheus conversion complete",
"metric_count", len(ts),
)

req.Timeseries = pts
return nil
}

// InfluxHandler is a http.Handler which accepts Influx Line protocol and converts it to WriteRequests.
func InfluxHandler(
maxRecvMsgSize int,
requestBufferPool util.Pool,
sourceIPs *middleware.SourceIPExtractor,
retryCfg RetryConfig,
push PushFunc,
_ *PushMetrics, // TODO(alexg) add pushMetrics()
_ prometheus.Registerer, // TODO(alexg): add reg
logger log.Logger,
) http.Handler {
//TODO(alexg): mirror otel.go implementation where we do decoding here rather than in parser() func?
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
logger := utillog.WithContext(ctx, logger)
if sourceIPs != nil {
source := sourceIPs.Get(r)
if source != "" {
logger = utillog.WithSourceIPs(source, logger)
}
}

supplier := func() (*mimirpb.WriteRequest, func(), error) {
rb := util.NewRequestBuffers(requestBufferPool)
var req mimirpb.PreallocWriteRequest

if err := parser(ctx, r, maxRecvMsgSize, rb, &req, logger); err != nil {
// TODO(alexg): Do we even need httpgrpc here?
// Check for httpgrpc error, default to client error if parsing failed
Contributor: I don't see any httpgrpc errors being returned by parser.

Contributor Author (Dec 17, 2024): Oops, should have got rid of that one. Will resolve once I work out the best way to wrap the existing error in the StatusBadRequest.

Contributor Author: Ah, httpgrpc is used to smuggle both the http status code and the error message out of the supplier() function. I've removed the misleading comment.
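A minimal sketch of the round trip being described, using the dskit calls that already appear in this file (httpgrpc.Error, grpcutil.ErrorToStatus); the function names are hypothetical and the fallback is simplified, since the real handler also special-cases context.DeadlineExceeded and distributor errors.

package sketch

import (
	"net/http"

	"github.com/grafana/dskit/grpcutil"
	"github.com/grafana/dskit/httpgrpc"
)

// wrapParseError attaches an HTTP status to a parse failure inside the supplier.
func wrapParseError(err error) error {
	return httpgrpc.Error(http.StatusBadRequest, err.Error())
}

// statusFromPushError recovers the smuggled status code and message in the handler.
func statusFromPushError(err error) (int, string) {
	if st, ok := grpcutil.ErrorToStatus(err); ok {
		return int(st.Code()), st.Message()
	}
	return http.StatusServiceUnavailable, err.Error()
}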

if _, ok := httpgrpc.HTTPResponseFromError(err); !ok {
err = httpgrpc.Error(http.StatusBadRequest, err.Error())
}

rb.CleanUp()
return nil, nil, err
}

cleanup := func() {
mimirpb.ReuseSlice(req.Timeseries)
rb.CleanUp()
}
return &req.WriteRequest, cleanup, nil
}
req := newRequest(supplier)
// https://docs.influxdata.com/influxdb/cloud/api/v2/#tag/Response-codes
if err := push(ctx, req); err != nil {
if errors.Is(err, context.Canceled) {
level.Warn(logger).Log("msg", "push request canceled", "err", err)
w.WriteHeader(statusClientClosedRequest)
return
}
if errors.Is(err, io2.ErrReadLimitExceeded) {
// TODO(alexg): One thing we have seen in the past is that telegraf clients send a batch of data
// if it is too big they should respond to the 413 below, but if a client doesn't understand this
// it just sends the next batch that is even bigger. In the past this has had to be dealt with by
// adding rate limits to drop the payloads.
level.Warn(logger).Log("msg", "request too large", "err", err)
// TODO(alexg): max size and bytes received in error?
Contributor: This would definitely help customers to debug.

Contributor Author: Done. Bubbled bytesRead down from the parsing function so that it is available for both metrics/histograms and for logging here.
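As a sketch of what the author describes (the key names and the helper are illustrative, not necessarily what landed), the 413 log line could carry both the observed size and the limit:

package sketch

import (
	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
)

// logRequestTooLarge logs the configured limit alongside how much was actually read.
func logRequestTooLarge(logger log.Logger, err error, bytesRead, maxSize int) {
	level.Warn(logger).Log("msg", "request too large", "err", err,
		"bytes_read", bytesRead, "max_request_size_bytes", maxSize)
}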

w.WriteHeader(http.StatusRequestEntityTooLarge)
return
}
// From: https://github.com/grafana/influx2cortex/blob/main/pkg/influx/errors.go

var (
httpCode int
errorMsg string
)
Contributor: While the code below isn't extremely complex, it's still a couple of ifs, so I'd write some sane defaults here. Suggested change: replace

var (
httpCode int
errorMsg string
)

with

httpCode := http.StatusInternalServerError
errorMsg := "unknown error"

Contributor Author: Agreed. Done!

Contributor Author: Had to add the var definitions back in as the linter was complaining that the initial assignments/defaults were unused.
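A small sketch of the linter point (assuming an ineffassign/staticcheck-style check): when every branch overwrites the initial values before they are read, the defaults count as dead assignments, while plain zero-value declarations do not. The function name is hypothetical.

package sketch

import "net/http"

// statusForSketch illustrates the complaint: the initial assignment to code is
// overwritten on every path before being read, so the linter reports it as unused.
func statusForSketch(ok bool) int {
	code := http.StatusInternalServerError // flagged: ineffectual assignment
	if ok {
		code = http.StatusAccepted
	} else {
		code = http.StatusServiceUnavailable
	}
	return code
}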

if st, ok := grpcutil.ErrorToStatus(err); ok {
// TODO(alexg): Hmm, still needed?
// This code is needed for a correct handling of errors returned by the supplier function.
// These errors are created by using the httpgrpc package.
httpCode = int(st.Code())
errorMsg = st.Message()
} else {
var distributorErr Error
errorMsg = err.Error()
if errors.Is(err, context.DeadlineExceeded) || !errors.As(err, &distributorErr) {
httpCode = http.StatusServiceUnavailable
} else {
httpCode = errorCauseToHTTPStatusCode(distributorErr.Cause(), false)
}
}
if httpCode != 202 {
// This error message is consistent with error message in Prometheus remote-write handler, and ingester's ingest-storage pushToStorage method.
msgs := []interface{}{"msg", "detected an error while ingesting Influx metrics request (the request may have been partially ingested)", "httpCode", httpCode, "err", err}
if httpCode/100 == 4 {
// TODO(alexg): what is this?
Contributor: Logs with insight=true are visible to our Grafana Cloud customers.

Contributor Author: Aha, TIL! Will add it back in as that could be useful.

msgs = append(msgs, "insight", true)
}
level.Error(logger).Log(msgs...)
}
if httpCode < 500 {
level.Info(logger).Log("msg", errorMsg, "response_code", httpCode, "err", tryUnwrap(err))
Contributor: Why do we need to tryUnwrap? Wrapping errors provides details about what went wrong; this is literally "tryToRemoveDetails".

} else if httpCode >= 500 {
Contributor: Suggested change: replace

} else if httpCode >= 500 {

with

} else {

Contributor Author: Done.

level.Warn(logger).Log("msg", errorMsg, "response_code", httpCode, "err", tryUnwrap(err))
}
addHeaders(w, err, r, httpCode, retryCfg)
w.WriteHeader(httpCode)
} else {
w.WriteHeader(http.StatusNoContent) // Needed for Telegraf, otherwise it tries to marshal JSON and considers the write a failure.
}
})
}

// Imported from: https://github.com/grafana/influx2cortex/blob/main/pkg/influx/errors.go

func tryUnwrap(err error) error {
if wrapped, ok := err.(interface{ Unwrap() error }); ok {
return wrapped.Unwrap()
}
return err
}
147 changes: 147 additions & 0 deletions pkg/distributor/influx_test.go
@@ -0,0 +1,147 @@
// SPDX-License-Identifier: AGPL-3.0-only

package distributor

import (
"bytes"
"context"
"net/http"
"net/http/httptest"
"testing"

"github.com/go-kit/log"
io2 "github.com/influxdata/influxdb/v2/kit/io"
Contributor: Suggested change: replace

io2 "github.com/influxdata/influxdb/v2/kit/io"

with

influxio "github.com/influxdata/influxdb/v2/kit/io"

Contributor Author: Done.

"github.com/stretchr/testify/assert"

"github.com/grafana/mimir/pkg/mimirpb"
)

func TestInfluxHandleSeriesPush(t *testing.T) {
defaultExpectedWriteRequest := &mimirpb.WriteRequest{
Timeseries: []mimirpb.PreallocTimeseries{
{
TimeSeries: &mimirpb.TimeSeries{
Labels: []mimirpb.LabelAdapter{
{Name: "__mimir_source__", Value: "influx"},
{Name: "__name__", Value: "measurement_f1"},
{Name: "t1", Value: "v1"},
},
Samples: []mimirpb.Sample{
{Value: 2, TimestampMs: 1465839830100},
},
},
},
},
}

tests := []struct {
name string
url string
data string
expectedCode int
push func(t *testing.T) PushFunc
maxRequestSizeBytes int
}{
{
name: "POST",
url: "/write",
data: "measurement,t1=v1 f1=2 1465839830100400200",
expectedCode: http.StatusNoContent,
push: func(t *testing.T) PushFunc {
return func(_ context.Context, pushReq *Request) error {
req, err := pushReq.WriteRequest()
assert.Equal(t, defaultExpectedWriteRequest, req)
assert.Nil(t, err)
return err
}
},
maxRequestSizeBytes: 1 << 20,
},
{
name: "POST with precision",
url: "/write?precision=ns",
data: "measurement,t1=v1 f1=2 1465839830100400200",
expectedCode: http.StatusNoContent,
push: func(t *testing.T) PushFunc {
return func(_ context.Context, pushReq *Request) error {
req, err := pushReq.WriteRequest()
assert.Equal(t, defaultExpectedWriteRequest, req)
assert.Nil(t, err)
return err
}
},
maxRequestSizeBytes: 1 << 20,
},
{
name: "invalid parsing error handling",
url: "/write",
data: "measurement,t1=v1 f1= 1465839830100400200",
expectedCode: http.StatusBadRequest,
push: func(t *testing.T) PushFunc {
return func(_ context.Context, pushReq *Request) error {
req, err := pushReq.WriteRequest()
assert.Nil(t, req)
// TODO(alexg): assert on specific err
// assert.NoError(t) // reminder to fix
return err
}
},
maxRequestSizeBytes: 1 << 20,
},
{
name: "invalid query params",
url: "/write?precision=?",
data: "measurement,t1=v1 f1=2 1465839830100400200",
expectedCode: http.StatusBadRequest,
push: func(t *testing.T) PushFunc {
// return func(ctx context.Context, req *mimirpb.WriteRequest) error {
return func(_ context.Context, pushReq *Request) error {
req, err := pushReq.WriteRequest()
assert.Nil(t, req)
// TODO(alexg): assert on specific err
// assert.NoError(t, err) // reminder to fix
return err
}
},
maxRequestSizeBytes: 1 << 20,
},
{
name: "internal server error",
url: "/write",
data: "measurement,t1=v1 f1=2 1465839830100400200",
expectedCode: http.StatusServiceUnavailable,
push: func(t *testing.T) PushFunc {
return func(_ context.Context, _ *Request) error {
assert.Error(t, context.DeadlineExceeded)
return context.DeadlineExceeded
}
},
maxRequestSizeBytes: 1 << 20,
},
{
name: "max batch size violated",
url: "/write",
data: "measurement,t1=v1 f1=2 0123456789",
expectedCode: http.StatusBadRequest,
push: func(t *testing.T) PushFunc {
return func(_ context.Context, pushReq *Request) error {
req, err := pushReq.WriteRequest()
assert.Nil(t, req)
assert.Error(t, io2.ErrReadLimitExceeded)
return err
}
},
maxRequestSizeBytes: 10,
},
}

for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
handler := InfluxHandler(tt.maxRequestSizeBytes, nil, nil, RetryConfig{}, tt.push(t), nil, nil, log.NewNopLogger())
req := httptest.NewRequest("POST", tt.url, bytes.NewReader([]byte(tt.data)))
rec := httptest.NewRecorder()
handler.ServeHTTP(rec, req)
assert.Equal(t, tt.expectedCode, rec.Code)
})
}
}