Log export fails with '413 Request Entity Too Large' status code #16834
I initially suspected that this could have been caused by a couple of recent changes we made to the application:
However, I took a separate sample application that I use for testing purposes and performed the exact same steps with it. Then I added the exact same service information that I had added to the original application, and I can see the traces in Datadog. I'm completely lost now as to what the root cause could be. I'll try disabling instrumentations one by one to see if that is related in any way.
Pinging code owners for exporter/datadog: @KSerrania @mx-psi @gbbr @knusbaum @amenasria @dineshg13. See Adding Labels via Comments if you do not have permissions to add labels yourself.
@julealgon can you please try with the latest collector release?
@dineshg13 I will try updating today along with a few other tests. However, are you sure this was fixed as part of #16380? I actually found that issue before opening this one, but they have different HTTP status codes. It looks like that one is about the URL, while mine is caused by payload size.
@dineshg13 after updating to the latest (0.67.0), I'm no longer seeing any errors in the collector console, but no logs are coming through at all from the application. If I push logs from my local environment instead, I see them as before. I have no idea what is going on here. I'm pushing from local using the exact same application/version as I'm using in Azure. I'm now wondering if something about the Azure .NET 7 App Service environment is causing the issue. Just in case, I'll try updating every OTEL .NET library to its latest beta version and redeploy the app.
Ok, after further testing, I can confirm this does not seem to be an Azure-specific connection issue: after adding a …, it appears that scope information is completely missing, however. All the logs inside the scope are intact, but the scope itself (the main tracing data) is not there, so it does not exist in Datadog either. Here is the output I saw from making a single request to my application (I've redacted some of the info):
{
"resourceLogs": [{
"resource": {
"attributes": [{
"key": "service.name",
"value": {
"stringValue": "REDACTED"
}
}, {
"key": "service.namespace",
"value": {
"stringValue": "REDACTED"
}
}, {
"key": "service.instance.id",
"value": {
"stringValue": "5e334c5e-5ef0-4c8c-80f4-ebf9a93d2d8b"
}
}, {
"key": "telemetry.sdk.name",
"value": {
"stringValue": "opentelemetry"
}
}, {
"key": "telemetry.sdk.language",
"value": {
"stringValue": "dotnet"
}
}, {
"key": "telemetry.sdk.version",
"value": {
"stringValue": "1.3.1.622"
}
}, {
"key": "deployment.environment",
"value": {
"stringValue": "beta"
}
}
]
},
"scopeLogs": [{
"scope": {},
"logRecords": [{
"timeUnixNano": "1670858470630743200",
"severityNumber": 9,
"severityText": "Information",
"body": {
"stringValue": "IDX10242: Security token: '[PII of type 'System.String' is hidden. For more details, see https://aka.ms/IdentityModel/PII.]' has a valid signature."
},
"attributes": [{
"key": "dotnet.ilogger.category",
"value": {
"stringValue": "Microsoft.IdentityModel.LoggingExtensions.IdentityLoggerAdapter"
}
}
],
"traceId": "ddaa4e9c5a223c774e0b3817d88d173e",
"spanId": "86430313a5338a30"
}, {
"timeUnixNano": "1670858470630840500",
"severityNumber": 9,
"severityText": "Information",
"body": {
"stringValue": "IDX10239: Lifetime of the token is valid."
},
"attributes": [{
"key": "dotnet.ilogger.category",
"value": {
"stringValue": "Microsoft.IdentityModel.LoggingExtensions.IdentityLoggerAdapter"
}
}
],
"traceId": "ddaa4e9c5a223c774e0b3817d88d173e",
"spanId": "86430313a5338a30"
}, {
"timeUnixNano": "1670858470630929700",
"severityNumber": 9,
"severityText": "Information",
"body": {
"stringValue": "IDX10234: Audience Validated.Audience: 'api://REDACTED'"
},
"attributes": [{
"key": "dotnet.ilogger.category",
"value": {
"stringValue": "Microsoft.IdentityModel.LoggingExtensions.IdentityLoggerAdapter"
}
}
],
"traceId": "ddaa4e9c5a223c774e0b3817d88d173e",
"spanId": "86430313a5338a30"
}, {
"timeUnixNano": "1670858470630982500",
"severityNumber": 9,
"severityText": "Information",
"body": {
"stringValue": "IDX10245: Creating claims identity from the validated token: '[PII of type 'System.IdentityModel.Tokens.Jwt.JwtSecurityToken' is hidden. For more details, see https://aka.ms/IdentityModel/PII.]'."
},
"attributes": [{
"key": "dotnet.ilogger.category",
"value": {
"stringValue": "Microsoft.IdentityModel.LoggingExtensions.IdentityLoggerAdapter"
}
}
],
"traceId": "ddaa4e9c5a223c774e0b3817d88d173e",
"spanId": "86430313a5338a30"
}, {
"timeUnixNano": "1670858470631025800",
"severityNumber": 9,
"severityText": "Information",
"body": {
"stringValue": "IDX10241: Security token validated. token: '[PII of type 'System.String' is hidden. For more details, see https://aka.ms/IdentityModel/PII.]'."
},
"attributes": [{
"key": "dotnet.ilogger.category",
"value": {
"stringValue": "Microsoft.IdentityModel.LoggingExtensions.IdentityLoggerAdapter"
}
}
],
"traceId": "ddaa4e9c5a223c774e0b3817d88d173e",
"spanId": "86430313a5338a30"
}
]
}
]
}
]
}
Notice how all of the logs have the trace/span IDs, but the scope info is missing entirely:
"scopeLogs": [{
  "scope": {},
EDIT: I'll look into potential changes in the AspNetCore instrumentation library.
@dineshg13, I'm seeing the 413 error again right now, using the latest collector:
Still not seeing any traces after bumping every single OTEL library to their latest versions.
I figured out why I was not seeing tracing info, and it was due to something else entirely. I'll still leave this one open, since the 413 responses are still happening even if they are unrelated to the lack of traces.
@julealgon what's the size of the log payload you are trying to submit to Datadog?
How can I get that information, @dineshg13? We have many projects currently sending log information through the collector, so I don't know precisely which source is sending the potentially large logs that could be causing this error. The exporter logs also don't mention which payload is failing. I'd need help identifying which payload is actually causing this problem.
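One way to reduce the chance of a single oversized request (independent of identifying the offending source) is to cap batch sizes in the collector's batch processor. A hedged sketch follows; the numbers are illustrative assumptions, not values recommended in this thread:

receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:
    send_batch_size: 512        # flush once this many records accumulate
    send_batch_max_size: 1024   # hard cap on records per export request
    timeout: 5s

exporters:
  datadog:
    api:
      key: ${DD_API_KEY}        # placeholder

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog]

Note that send_batch_max_size limits the record count, not the serialized byte size, so this only lowers the likelihood of hitting the intake's request-size limit rather than guaranteeing requests stay under it.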
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. Pinging code owners for exporter/datadog; see Adding Labels via Comments if you do not have permissions to add labels yourself.
@dineshg13 can you confirm whether this might still be an issue or not? |
@julealgon we have fixed this issue. Can you please try using the latest collector-contrib image?
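If the collector is run as a container, updating to a newer contrib image might look like the following sketch; the tag, paths, and ports are illustrative assumptions, so substitute whichever release actually contains the fix:

version: "3.8"
services:
  otel-collector:
    # Newer opentelemetry-collector-contrib image; the tag is illustrative
    image: otel/opentelemetry-collector-contrib:0.77.0
    command: ["--config=/etc/otelcol-contrib/config.yaml"]
    volumes:
      - ./config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP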
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. Pinging code owners for exporter/datadog; see Adding Labels via Comments if you do not have permissions to add labels yourself.
@julealgon were you able to try this?
I did update the collector a while ago and the problem seemed to go away. |
Thanks for following up @julealgon! I am closing this based on your comment :) |
Component(s)
exporter/datadog
What happened?
Description
I observed that one of our applications has stopped pushing logs via the collector, and then double-checked the error log on the VM running the collector.
It appears to be failing with a 413 Request Entity Too Large error, retrying multiple times, and then discarding the data.
Steps to Reproduce
It is unknown at this point what is causing this.
Expected Result
No exceptions, and logs flowing through to Datadog.
Actual Result
Logs are not flowing for this particular application.
Collector version
0.62.1
Environment information
Environment
OS: Windows Server 2019 DataCenter 10.0.17763.3650
OpenTelemetry Collector configuration
Log output
Additional context
Nothing relevant at this point.