
fixed the issue that windows cannot contain filenames with colons (replaced ':' with '+') #1700

Merged 16 commits into grafana:main on Oct 25, 2022

Conversation

kilian-kier
Contributor

@kilian-kier commented Aug 29, 2022

What this PR does:
It saves wal files with a '+' filename separator instead of ':', because Windows does not allow colons in filenames.

Checklist

  • Tests updated
  • Documentation added
  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]

@CLAassistant

CLAassistant commented Aug 29, 2022

CLA assistant check
All committers have signed the CLA.

@mdisibio
Contributor

mdisibio commented Sep 2, 2022

Hi, thanks for fixing this. We were surprised to hear that Tempo doesn't work on Windows; since that file naming hasn't changed recently, it must have been broken for a long time or never have worked.

I agree with the overall direction of changing the naming convention of the wal files, but I'd like to avoid OS-specific logic and have Tempo run the same everywhere. That changes the scope of this PR quite a bit, because we'd need to be backwards-compatible: parse existing files with the old separator while writing new files with the new separator. I'd also like to find an alternative to # for readability, but it must not conflict with the tenant ID rules, since we include the tenant ID in the file name. I don't have a recommendation yet.

@khoaminhbui

khoaminhbui commented Sep 12, 2022

I'm coming at this from the problem of configuring Tempo on Windows. Given the concerns stated above, I think a file name with : or # looks weird. Can we use . instead? It is not a valid character under the tenant ID rules and is normal in file names.
About backward compatibility: if we don't want OS-specific logic and we don't want to migrate, then the only way is to use one separator on all platforms and handle both the old (:) and the new separator when splitting.

And please make this happen ASAP; Tempo is not working on Windows, and you know how high a priority that should be.

@mdisibio
Contributor

Can we use . instead?

No, . is a valid character according to the tenant ID rules when combined with other characters (just that . and .. IDs are disallowed).

At the risk of bike-shedding, how about +? Compared to the others it looks fairly readable, doesn't conflict with tenant ID rules, and should be supported on NTFS.

da28ebdb-074b-4e3a-97e3-39676f79f7f7:single-tenant:v2:snappy:v2
da28ebdb-074b-4e3a-97e3-39676f79f7f7#single-tenant#v2#snappy#v2
da28ebdb-074b-4e3a-97e3-39676f79f7f7+single-tenant+v2+snappy+v2

Finally, @kilian-kier are you interested in doing the non-OS specific approach on this PR? We would change all wal file names to the new separator, but check for : in a backwards-compatible way, and remove the OS check. If not, I think it's ok to go ahead and merge this to get Tempo working on Windows, and we can come follow up on this later.

@kilian-kier
Contributor Author

kilian-kier commented Sep 14, 2022

Finally, @kilian-kier are you interested in doing the non-OS specific approach on this PR?

Yes, I can do it.

Is it enough to check whether the filename contains a : and, if not, check for +?

@mdisibio
Contributor

Finally, @kilian-kier are you interested in doing the non-OS specific approach on this PR?

Yes, I can do it.

Is it enough to check whether the filename contains a : and, if not, check for +?

Great. Yes, that sounds good, but reverse the order: check for the new separator + first.
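The backwards-compatible check discussed above can be sketched in Go roughly like this (illustrative only; the function name and structure are assumptions for this sketch, not Tempo's actual wal code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitWALFilename splits a wal filename into its fields. It tries the new
// '+' separator first and falls back to the legacy ':' separator, so files
// written before the change can still be replayed.
func splitWALFilename(name string) []string {
	if strings.Contains(name, "+") {
		return strings.Split(name, "+")
	}
	return strings.Split(name, ":") // legacy separator
}

func main() {
	legacy := "da28ebdb-074b-4e3a-97e3-39676f79f7f7:single-tenant:v2:snappy:v2"
	current := "da28ebdb-074b-4e3a-97e3-39676f79f7f7+single-tenant+v2+snappy+v2"
	fmt.Println(splitWALFilename(legacy)[1])  // single-tenant
	fmt.Println(splitWALFilename(current)[1]) // single-tenant
}
```

Checking for + first matters because the new separator is the authoritative one going forward; ':' is only a fallback for files written by older versions.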

@kilian-kier
Contributor Author

@mdisibio it should work now

@kilian-kier kilian-kier changed the title fixed the issue that windows cannot contain filenames with colons (replaced ':' with '#') fixed the issue that windows cannot contain filenames with colons (replaced ':' with '+') Sep 18, 2022
@mdisibio
Contributor

@kilian-kier There are some tests that need to be updated for the new separator; here is an example. To run the tests locally, execute make test.

--- FAIL: TestFullFilename (0.00s)
    --- FAIL: TestFullFilename/legacy (0.00s)
        append_block_test.go:110: 
            	Error Trace:	append_block_test.go:110
            	Error:      	Not equal: 
            	            	expected: "/blerg/123e4567-e89b-12d3-a456-426614174000:foo"
            	            	actual  : "/blerg/123e4567-e89b-12d3-a456-426614174000+foo"

@kilian-kier
Contributor Author

@mdisibio OK, the tests should work now as well.

@mapno
Member

mapno commented Oct 3, 2022

Hi @kilian-kier. We recently updated opentelemetry-proto to v0.18.0. You need to update the submodule to that version to be up-to-date with that change, so it doesn't appear in this PR. You can run $ cd opentelemetry && git checkout v0.18.0 and that should do it. Thanks!

@kilian-kier
Contributor Author

Hi @kilian-kier. We recently updated opentelemetry-proto to v0.18.0. You need to update the submodule to that version to be up-to-date with that change, so it doesn't appear in this PR. You can run $ cd opentelemetry && git checkout v0.18.0 and that should do it. Thanks!

@mapno Done!

Contributor

@mdisibio left a comment

@kilian-kier Thanks a lot, great to see Windows support fixed.

@mdisibio mdisibio merged commit b11ae37 into grafana:main Oct 25, 2022
@rubytech-avsorokin

rubytech-avsorokin commented Nov 9, 2022

Hi there!

It does not work for me :(

I've built the main branch from source on a Linux VM: env GOOS=windows make tempo. The fix has been applied.

I'm very new to Go; am I doing something wrong?

Still getting this:

PS C:\Tempo> .\tempo.exe "-config.file=.\config\config.yaml"
level=info ts=2022-11-09T13:57:07.8178917Z caller=main.go:197 msg="initialising OpenTracing tracer"
level=info ts=2022-11-09T13:57:07.825193Z caller=main.go:114 msg="Starting Tempo" version="(version=main-16fc036, branch=main, revision=16fc0360)"
level=info ts=2022-11-09T13:57:07.830709Z caller=server.go:323 http=[::]:3200 grpc=[::]:9095 msg="server listening on addresses"
ts=2022-11-09T13:57:07Z level=info msg="OTel Shim Logger Initialized" component=tempo
level=warn ts=2022-11-09T13:57:07.8363173Z caller=modules.go:181 msg="Worker address is empty in single binary mode. Attempting automatic worker configuration. If queries are unresponsive consider configuring the worker explicitly." address=127.0.0.1:9095
level=info ts=2022-11-09T13:57:07.8373672Z caller=worker.go:103 msg="Starting querier worker connected to query-frontend" frontend=127.0.0.1:9095
level=info ts=2022-11-09T13:57:07.8380989Z caller=frontend.go:43 msg="creating middleware in query frontend"
level=info ts=2022-11-09T13:57:07.8386207Z caller=tempodb.go:428 msg="polling enabled" interval=5m0s concurrency=50
level=info ts=2022-11-09T13:57:07.8432781Z caller=module_service.go:82 msg=initialising module=store
level=info ts=2022-11-09T13:57:07.8432781Z caller=module_service.go:82 msg=initialising module=server
level=info ts=2022-11-09T13:57:07.8449086Z caller=module_service.go:82 msg=initialising module=memberlist-kv
level=info ts=2022-11-09T13:57:07.8454201Z caller=module_service.go:82 msg=initialising module=overrides
level=info ts=2022-11-09T13:57:07.8465714Z caller=module_service.go:82 msg=initialising module=usage-report
level=info ts=2022-11-09T13:57:07.8465714Z caller=module_service.go:82 msg=initialising module=ring
level=info ts=2022-11-09T13:57:07.8518842Z caller=ring.go:263 msg="ring doesn't exist in KV store yet"
level=info ts=2022-11-09T13:57:07.8510911Z caller=module_service.go:82 msg=initialising module=compactor
level=info ts=2022-11-09T13:57:07.8510911Z caller=module_service.go:82 msg=initialising module=query-frontend
level=info ts=2022-11-09T13:57:07.8510911Z caller=module_service.go:82 msg=initialising module=ingester
level=info ts=2022-11-09T13:57:07.8534595Z caller=client.go:255 msg="value is nil" key=collectors/ring index=2
level=info ts=2022-11-09T13:57:07.8534595Z caller=module_service.go:82 msg=initialising module=distributor
level=info ts=2022-11-09T13:57:07.8534595Z caller=module_service.go:82 msg=initialising module=querier
level=info ts=2022-11-09T13:57:07.8539693Z caller=tempodb.go:428 msg="polling enabled" interval=5m0s concurrency=50
level=info ts=2022-11-09T13:57:07.8556157Z caller=ingester.go:327 msg="beginning wal replay"
ts=2022-11-09T13:57:07Z level=info msg="Starting GRPC server on endpoint 0.0.0.0:4317" component=tempo
level=warn ts=2022-11-09T13:57:07.8587147Z caller=rescan_blocks.go:22 msg="failed to open search wal directory" err="open C:\\Tempo\\wal\\search: The system cannot find the file specified."
ts=2022-11-09T13:57:07Z level=info msg="Starting HTTP server on endpoint 0.0.0.0:4318" component=tempo
level=info ts=2022-11-09T13:57:07.8629193Z caller=ingester.go:397 msg="wal replay complete"
level=info ts=2022-11-09T13:57:07.8654456Z caller=ingester.go:411 msg="reloading local blocks" tenants=0
ts=2022-11-09T13:57:07Z level=info msg="No sampling strategies provided or URL is unavailable, using defaults" component=tempo
level=info ts=2022-11-09T13:57:07.8626738Z caller=worker.go:179 msg="adding connection" addr=127.0.0.1:9095
level=info ts=2022-11-09T13:57:07.8669943Z caller=compactor.go:161 msg="enabling compaction"
level=info ts=2022-11-09T13:57:07.8687574Z caller=lifecycler.go:547 msg="not loading tokens from file, tokens file path is empty"
level=info ts=2022-11-09T13:57:07.872779Z caller=lifecycler.go:576 msg="instance not found in ring, adding with no tokens" ring=ingester
level=info ts=2022-11-09T13:57:07.8732905Z caller=lifecycler.go:416 msg="auto-joining cluster after timeout" ring=ingester
level=info ts=2022-11-09T13:57:07.8736702Z caller=tempodb.go:402 msg="compaction and retention enabled."
level=info ts=2022-11-09T13:57:07.8687574Z caller=app.go:195 msg="Tempo started"
level=warn ts=2022-11-09T13:57:20.3859792Z caller=grpc_logging.go:43 method=/tempopb.Pusher/PushBytesV2 duration=0s err="open C:\\Tempo\\wal\\942e9eed-19ce-4fdf-8230-963789e0ec0d:single-tenant:v2:snappy:v2: The filename, directory name, or volume label syntax is incorrect." msg=gRPC
level=error ts=2022-11-09T13:57:20.3865085Z caller=rate_limited_logger.go:27 msg="pusher failed to consume trace data" err="rpc error: code = Unknown desc = open C:\\Tempo\\wal\\942e9eed-19ce-4fdf-8230-963789e0ec0d:single-tenant:v2:snappy:v2: The filename, directory name, or volume label syntax is incorrect."

I'm using this config:


search_enabled: true
metrics_generator_enabled: false

server:
  http_listen_port: 3200

distributor:
  receivers:                           # this configuration will listen on all ports and protocols that tempo is capable of.
    jaeger:                            # the receivers all come from the OpenTelemetry collector.  more configuration information can
      protocols:                       # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver
        thrift_http:                   #
        grpc:                          # for a production deployment you should only enable the receivers you need!
        thrift_binary:
        thrift_compact:
    zipkin:
    otlp:
      protocols:
        http:
        grpc:
    opencensus:

ingester:
  trace_idle_period: 10s               # the length of time after a trace has not received spans to consider it complete and flush it
  max_block_bytes: 1_000_000           # cut the head block when it hits this size or ...
  max_block_duration: 5m               #   this much time passes

compactor:
  compaction:
    compaction_window: 1h              # blocks in this time window will be compacted together
    max_block_bytes: 100_000_000        # maximum size of compacted blocks
    block_retention: 1h
    compacted_block_retention: 10m
    flush_size_bytes: 5242880

# metrics_generator:
#   registry:
#     external_labels:
#       source: tempo
#       cluster: docker-compose
#   storage:
#     path: /tmp/tempo/generator/wal
#     remote_write:
#       - url: http://prometheus:9090/api/v1/write
#         send_exemplars: true

storage:
  trace:
    backend: s3                        # backend configuration to use
    block:
      bloom_filter_false_positive: .05 # bloom filter false positive rate.  lower values create larger filters but fewer false positives
      index_downsample_bytes: 1000     # number of bytes per index record
      encoding: zstd                   # block encoding/compression.  options: none, gzip, lz4-64k, lz4-256k, lz4-1M, lz4, snappy, zstd, s2
    wal:
      path: "C:\\Tempo\\wal"             # where to store the wal locally
      encoding: snappy                 # wal encoding/compression.  options: none, gzip, lz4-64k, lz4-256k, lz4-1M, lz4, snappy, zstd, s2
    s3:
      bucket: tempo                    # how to store data in s3
      endpoint: 127.0.0.1:9000
      access_key: ***
      secret_key: ***
      insecure: true
      # For using AWS, select the appropriate regional endpoint and region
      # endpoint: s3.dualstack.us-west-2.amazonaws.com
      # region: us-west-2
    pool:
      max_workers: 100                 # worker pool determines the number of parallel requests to the object store backend
      queue_depth: 10000

#overrides:
#  metrics_generator_processors: [service-graphs, span-metrics]


Regards.

@henrikschristensen

(quotes @rubytech-avsorokin's comment above in full)

Did you resolve this problem?
I have the same problem. I tried with kilian's branch and with the main branch; it still fails to write files to the wal. It is strange, because looking at the code it should be fixed. It is as though the build does something odd.

@SuhutDev

SuhutDev commented Apr 9, 2023

(quotes @rubytech-avsorokin's comment and @henrikschristensen's reply above in full)

Try this:

storage:
  trace:
    backend: local
    local:
      path: /var/tempo

9 participants