
Add kafka producer for yorkie analytics #1143

Draft · wants to merge 10 commits into main

Conversation

@emplam27 emplam27 (Contributor) commented Feb 6, 2025

What this PR does / why we need it:

Add a Kafka producer for Yorkie analytics MAU (monthly active users) measurement.

yorkie server --kafka-address localhost:29092 --kafka-topic user-events

Which issue(s) this PR fixes:

Addresses #1130

Special notes for your reviewer:

Does this PR introduce a user-facing change?:


Additional documentation:


Checklist:

  • Added relevant tests or not required
  • Addressed and resolved all CodeRabbit review comments
  • Didn't break anything

Summary by CodeRabbit

  • Documentation

    • Improved API documentation formatting for enhanced clarity and consistency.
  • New Features

    • Expanded client activation to support user identifiers and detailed metadata.
    • Integrated a Kafka-based message broker to enable robust event publishing.
    • Introduced enhanced configuration options and updated container deployment settings, including an external Kafka listener.

@emplam27 emplam27 self-assigned this Feb 6, 2025

coderabbitai bot commented Feb 6, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

The pull request introduces formatting improvements across multiple OpenAPI YAML files by consolidating multi-line descriptions, standardizing quotation styles, and aligning indentations. It adds new client event types and extends the protocol definition with additional fields. Furthermore, the PR integrates Kafka support into various server components through new configuration options, dependency updates, and a dedicated message broker package. Test cases have been updated to accommodate the additional backend configuration parameter.

Changes

  • api/docs/.../admin.openapi.yaml, api/docs/.../cluster.openapi.yaml, api/docs/.../resources.openapi.yaml, api/docs/.../yorkie.openapi.yaml: Reformatted OpenAPI specs: consolidated multi-line descriptions, standardized $ref quoting to single quotes, and aligned indentations in info, components, and tags sections.
  • api/types/events/events.go, api/yorkie/v1/yorkie.proto: Added new client event types (ClientEventType, ClientEvent) and extended ActivateClientRequest with additional fields (user_id, metadata).
  • build/docker/analytics/docker-compose.yml, build/docker/analytics/init-user-events-db.sql: Updated Docker Compose for Kafka configuration (port mapping, listeners, healthcheck revisions) and changed the metadata column type from STRING to JSON in the SQL script.
  • cmd/yorkie/server.go, go.mod, server/config.go, server/config.sample.yml: Introduced Kafka integration: added new CLI flags for Kafka, updated dependencies (e.g., added kafka-go), and included Kafka configuration fields in server settings.
  • server/backend/backend.go: Updated the backend initialization to accept a new Kafka configuration parameter and incorporated a new MsgBroker field for message production.
  • server/backend/messagebroker/*: Created a new message broker package with a Config struct, a DummyBroker for fallback, a KafkaBroker implementation for real Kafka integration, and the Message interface with its concrete implementation (UserEventMessage); see the sketch after this list.
  • server/rpc/yorkie_server.go, server/server.go: Updated the ActivateClient and server initialization flows to use the new message broker for publishing client activation events and to pass the Kafka configuration accordingly.
  • Test files (server/packs/packs_test.go, server/rpc/server_test.go, test/bench/push_pull_bench_test.go, test/complex/main_test.go, test/integration/housekeeping_test.go): Modified test setups to pass an additional nil parameter reflecting the updated backend.New function signature for Kafka configuration.
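
To make the shape of the new messagebroker package concrete, here is a condensed sketch assembled from this walkthrough and the review excerpts further down. The field and method names follow those excerpts, but the Broker interface, the Close method, and the JSON marshaling are assumptions made for illustration; the exact code in the PR may differ.

package messagebroker

import (
	"context"
	"encoding/json"

	"github.com/segmentio/kafka-go"
)

// Config holds the Kafka settings passed down from the server configuration.
type Config struct {
	Address string `yaml:"ConnectionURL"`
	Topic   string `yaml:"Topic"`
}

// Message is anything the broker can serialize and publish.
type Message interface {
	Marshal() ([]byte, error)
}

// Broker publishes user events for analytics.
type Broker interface {
	Produce(ctx context.Context, msg Message) error
	Close() error
}

// DummyBroker is the no-op fallback used when Kafka is not configured.
type DummyBroker struct{}

// Produce drops the message.
func (*DummyBroker) Produce(context.Context, Message) error { return nil }

// Close does nothing.
func (*DummyBroker) Close() error { return nil }

// KafkaBroker publishes messages through a kafka-go writer.
type KafkaBroker struct {
	writer *kafka.Writer
}

// Produce marshals the message and writes it to the configured topic.
func (b *KafkaBroker) Produce(ctx context.Context, msg Message) error {
	value, err := msg.Marshal()
	if err != nil {
		return err
	}
	return b.writer.WriteMessages(ctx, kafka.Message{Value: value})
}

// Close flushes and closes the underlying writer.
func (b *KafkaBroker) Close() error { return b.writer.Close() }

// Ensure returns a KafkaBroker when an address is configured and a DummyBroker
// otherwise, so callers never need to nil-check the broker.
func Ensure(conf *Config) Broker {
	if conf == nil || conf.Address == "" {
		return &DummyBroker{}
	}
	return &KafkaBroker{writer: &kafka.Writer{
		Addr:     kafka.TCP(conf.Address),
		Topic:    conf.Topic,
		Balancer: &kafka.LeastBytes{},
	}}
}

// UserEventMessage is the payload produced on client activation. The PR types
// EventType as events.ClientEventType; a plain string keeps this sketch self-contained.
type UserEventMessage struct {
	UserID    string            `json:"user_id"`
	EventType string            `json:"event_type"`
	ProjectID string            `json:"project_id"`
	UserAgent string            `json:"user_agent"`
	Metadata  map[string]string `json:"metadata"`
}

// Marshal encodes the event as JSON for the Kafka message value.
func (m UserEventMessage) Marshal() ([]byte, error) { return json.Marshal(m) }
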

Sequence Diagram(s)

sequenceDiagram
    participant C as Client
    participant YS as YorkieServer (RPC)
    participant BE as Backend
    participant MB as Message Broker
    participant KB as KafkaBroker

    C->>YS: ActivateClient request
    YS->>BE: Process activation
    BE->>MB: Produce UserEventMessage
    alt Kafka Configured
      MB->>KB: Forward message via Kafka writer
      KB-->>MB: Acknowledge message write
    else No Kafka Config
      MB-->>BE: Use DummyBroker (no-operation)
    end
    BE-->>YS: Return activation response
    YS-->>C: Send response

Possibly related PRs

  • Introduce VersionVector #1047: Modifies OpenAPI schemas by introducing a new VersionVector feature, which relates directly to the schema updates and formatting consistency changes found in this PR.

Suggested labels

enhancement 🌟


@hackerwins hackerwins force-pushed the kafka-config branch 3 times, most recently from 63b0091 to 7f4fd90 on February 6, 2025 11:28

@github-actions github-actions bot left a comment

Go Benchmark

Benchmark suite Current: dafd831 Previous: 87cce01 Ratio
BenchmarkDocument/constructor_test 1499 ns/op 1385 B/op 24 allocs/op 1442 ns/op 1385 B/op 24 allocs/op 1.04
BenchmarkDocument/constructor_test - ns/op 1499 ns/op 1442 ns/op 1.04
BenchmarkDocument/constructor_test - B/op 1385 B/op 1385 B/op 1
BenchmarkDocument/constructor_test - allocs/op 24 allocs/op 24 allocs/op 1
BenchmarkDocument/status_test 1026 ns/op 1353 B/op 22 allocs/op 1054 ns/op 1353 B/op 22 allocs/op 0.97
BenchmarkDocument/status_test - ns/op 1026 ns/op 1054 ns/op 0.97
BenchmarkDocument/status_test - B/op 1353 B/op 1353 B/op 1
BenchmarkDocument/status_test - allocs/op 22 allocs/op 22 allocs/op 1
BenchmarkDocument/equals_test 7763 ns/op 7562 B/op 129 allocs/op 7916 ns/op 7562 B/op 129 allocs/op 0.98
BenchmarkDocument/equals_test - ns/op 7763 ns/op 7916 ns/op 0.98
BenchmarkDocument/equals_test - B/op 7562 B/op 7562 B/op 1
BenchmarkDocument/equals_test - allocs/op 129 allocs/op 129 allocs/op 1
BenchmarkDocument/nested_update_test 17065 ns/op 12307 B/op 258 allocs/op 16906 ns/op 12307 B/op 258 allocs/op 1.01
BenchmarkDocument/nested_update_test - ns/op 17065 ns/op 16906 ns/op 1.01
BenchmarkDocument/nested_update_test - B/op 12307 B/op 12307 B/op 1
BenchmarkDocument/nested_update_test - allocs/op 258 allocs/op 258 allocs/op 1
BenchmarkDocument/delete_test 22994 ns/op 15788 B/op 339 allocs/op 23166 ns/op 15788 B/op 339 allocs/op 0.99
BenchmarkDocument/delete_test - ns/op 22994 ns/op 23166 ns/op 0.99
BenchmarkDocument/delete_test - B/op 15788 B/op 15788 B/op 1
BenchmarkDocument/delete_test - allocs/op 339 allocs/op 339 allocs/op 1
BenchmarkDocument/object_test 8619 ns/op 7034 B/op 118 allocs/op 8635 ns/op 7034 B/op 118 allocs/op 1.00
BenchmarkDocument/object_test - ns/op 8619 ns/op 8635 ns/op 1.00
BenchmarkDocument/object_test - B/op 7034 B/op 7034 B/op 1
BenchmarkDocument/object_test - allocs/op 118 allocs/op 118 allocs/op 1
BenchmarkDocument/array_test 32222 ns/op 12139 B/op 273 allocs/op 33323 ns/op 12139 B/op 273 allocs/op 0.97
BenchmarkDocument/array_test - ns/op 32222 ns/op 33323 ns/op 0.97
BenchmarkDocument/array_test - B/op 12139 B/op 12139 B/op 1
BenchmarkDocument/array_test - allocs/op 273 allocs/op 273 allocs/op 1
BenchmarkDocument/text_test 32101 ns/op 15188 B/op 484 allocs/op 32055 ns/op 15188 B/op 484 allocs/op 1.00
BenchmarkDocument/text_test - ns/op 32101 ns/op 32055 ns/op 1.00
BenchmarkDocument/text_test - B/op 15188 B/op 15188 B/op 1
BenchmarkDocument/text_test - allocs/op 484 allocs/op 484 allocs/op 1
BenchmarkDocument/text_composition_test 31794 ns/op 18701 B/op 501 allocs/op 32501 ns/op 18701 B/op 501 allocs/op 0.98
BenchmarkDocument/text_composition_test - ns/op 31794 ns/op 32501 ns/op 0.98
BenchmarkDocument/text_composition_test - B/op 18701 B/op 18701 B/op 1
BenchmarkDocument/text_composition_test - allocs/op 501 allocs/op 501 allocs/op 1
BenchmarkDocument/rich_text_test 85624 ns/op 39352 B/op 1146 allocs/op 90445 ns/op 39358 B/op 1146 allocs/op 0.95
BenchmarkDocument/rich_text_test - ns/op 85624 ns/op 90445 ns/op 0.95
BenchmarkDocument/rich_text_test - B/op 39352 B/op 39358 B/op 1.00
BenchmarkDocument/rich_text_test - allocs/op 1146 allocs/op 1146 allocs/op 1
BenchmarkDocument/counter_test 18161 ns/op 11810 B/op 253 allocs/op 18334 ns/op 11810 B/op 253 allocs/op 0.99
BenchmarkDocument/counter_test - ns/op 18161 ns/op 18334 ns/op 0.99
BenchmarkDocument/counter_test - B/op 11810 B/op 11810 B/op 1
BenchmarkDocument/counter_test - allocs/op 253 allocs/op 253 allocs/op 1
BenchmarkDocument/text_edit_gc_100 1386140 ns/op 864927 B/op 17282 allocs/op 1412188 ns/op 864908 B/op 17282 allocs/op 0.98
BenchmarkDocument/text_edit_gc_100 - ns/op 1386140 ns/op 1412188 ns/op 0.98
BenchmarkDocument/text_edit_gc_100 - B/op 864927 B/op 864908 B/op 1.00
BenchmarkDocument/text_edit_gc_100 - allocs/op 17282 allocs/op 17282 allocs/op 1
BenchmarkDocument/text_edit_gc_1000 53057955 ns/op 46838360 B/op 185596 allocs/op 57222048 ns/op 46838739 B/op 185584 allocs/op 0.93
BenchmarkDocument/text_edit_gc_1000 - ns/op 53057955 ns/op 57222048 ns/op 0.93
BenchmarkDocument/text_edit_gc_1000 - B/op 46838360 B/op 46838739 B/op 1.00
BenchmarkDocument/text_edit_gc_1000 - allocs/op 185596 allocs/op 185584 allocs/op 1.00
BenchmarkDocument/text_split_gc_100 2105461 ns/op 1581064 B/op 15950 allocs/op 2151513 ns/op 1581001 B/op 15951 allocs/op 0.98
BenchmarkDocument/text_split_gc_100 - ns/op 2105461 ns/op 2151513 ns/op 0.98
BenchmarkDocument/text_split_gc_100 - B/op 1581064 B/op 1581001 B/op 1.00
BenchmarkDocument/text_split_gc_100 - allocs/op 15950 allocs/op 15951 allocs/op 1.00
BenchmarkDocument/text_split_gc_1000 128555061 ns/op 137791158 B/op 184999 allocs/op 129782372 ns/op 137790460 B/op 184996 allocs/op 0.99
BenchmarkDocument/text_split_gc_1000 - ns/op 128555061 ns/op 129782372 ns/op 0.99
BenchmarkDocument/text_split_gc_1000 - B/op 137791158 B/op 137790460 B/op 1.00
BenchmarkDocument/text_split_gc_1000 - allocs/op 184999 allocs/op 184996 allocs/op 1.00
BenchmarkDocument/text_delete_all_10000 17361960 ns/op 10578984 B/op 56142 allocs/op 18300206 ns/op 10575821 B/op 56132 allocs/op 0.95
BenchmarkDocument/text_delete_all_10000 - ns/op 17361960 ns/op 18300206 ns/op 0.95
BenchmarkDocument/text_delete_all_10000 - B/op 10578984 B/op 10575821 B/op 1.00
BenchmarkDocument/text_delete_all_10000 - allocs/op 56142 allocs/op 56132 allocs/op 1.00
BenchmarkDocument/text_delete_all_100000 291642665 ns/op 105517588 B/op 566038 allocs/op 279901785 ns/op 105513620 B/op 566013 allocs/op 1.04
BenchmarkDocument/text_delete_all_100000 - ns/op 291642665 ns/op 279901785 ns/op 1.04
BenchmarkDocument/text_delete_all_100000 - B/op 105517588 B/op 105513620 B/op 1.00
BenchmarkDocument/text_delete_all_100000 - allocs/op 566038 allocs/op 566013 allocs/op 1.00
BenchmarkDocument/text_100 233289 ns/op 120907 B/op 5181 allocs/op 244301 ns/op 120908 B/op 5181 allocs/op 0.95
BenchmarkDocument/text_100 - ns/op 233289 ns/op 244301 ns/op 0.95
BenchmarkDocument/text_100 - B/op 120907 B/op 120908 B/op 1.00
BenchmarkDocument/text_100 - allocs/op 5181 allocs/op 5181 allocs/op 1
BenchmarkDocument/text_1000 2466951 ns/op 1156078 B/op 51084 allocs/op 2576707 ns/op 1156087 B/op 51084 allocs/op 0.96
BenchmarkDocument/text_1000 - ns/op 2466951 ns/op 2576707 ns/op 0.96
BenchmarkDocument/text_1000 - B/op 1156078 B/op 1156087 B/op 1.00
BenchmarkDocument/text_1000 - allocs/op 51084 allocs/op 51084 allocs/op 1
BenchmarkDocument/array_1000 1252464 ns/op 1088312 B/op 11880 allocs/op 1333395 ns/op 1088396 B/op 11879 allocs/op 0.94
BenchmarkDocument/array_1000 - ns/op 1252464 ns/op 1333395 ns/op 0.94
BenchmarkDocument/array_1000 - B/op 1088312 B/op 1088396 B/op 1.00
BenchmarkDocument/array_1000 - allocs/op 11880 allocs/op 11879 allocs/op 1.00
BenchmarkDocument/array_10000 13477228 ns/op 9887957 B/op 120729 allocs/op 13677413 ns/op 9888732 B/op 120733 allocs/op 0.99
BenchmarkDocument/array_10000 - ns/op 13477228 ns/op 13677413 ns/op 0.99
BenchmarkDocument/array_10000 - B/op 9887957 B/op 9888732 B/op 1.00
BenchmarkDocument/array_10000 - allocs/op 120729 allocs/op 120733 allocs/op 1.00
BenchmarkDocument/array_gc_100 132231 ns/op 99888 B/op 1266 allocs/op 138538 ns/op 99883 B/op 1266 allocs/op 0.95
BenchmarkDocument/array_gc_100 - ns/op 132231 ns/op 138538 ns/op 0.95
BenchmarkDocument/array_gc_100 - B/op 99888 B/op 99883 B/op 1.00
BenchmarkDocument/array_gc_100 - allocs/op 1266 allocs/op 1266 allocs/op 1
BenchmarkDocument/array_gc_1000 1441637 ns/op 1140807 B/op 12926 allocs/op 1486987 ns/op 1140940 B/op 12926 allocs/op 0.97
BenchmarkDocument/array_gc_1000 - ns/op 1441637 ns/op 1486987 ns/op 0.97
BenchmarkDocument/array_gc_1000 - B/op 1140807 B/op 1140940 B/op 1.00
BenchmarkDocument/array_gc_1000 - allocs/op 12926 allocs/op 12926 allocs/op 1
BenchmarkDocument/counter_1000 199799 ns/op 178137 B/op 5771 allocs/op 214761 ns/op 178135 B/op 5771 allocs/op 0.93
BenchmarkDocument/counter_1000 - ns/op 199799 ns/op 214761 ns/op 0.93
BenchmarkDocument/counter_1000 - B/op 178137 B/op 178135 B/op 1.00
BenchmarkDocument/counter_1000 - allocs/op 5771 allocs/op 5771 allocs/op 1
BenchmarkDocument/counter_10000 2155896 ns/op 2068953 B/op 59778 allocs/op 2239619 ns/op 2068955 B/op 59778 allocs/op 0.96
BenchmarkDocument/counter_10000 - ns/op 2155896 ns/op 2239619 ns/op 0.96
BenchmarkDocument/counter_10000 - B/op 2068953 B/op 2068955 B/op 1.00
BenchmarkDocument/counter_10000 - allocs/op 59778 allocs/op 59778 allocs/op 1
BenchmarkDocument/object_1000 1410472 ns/op 1437055 B/op 9924 allocs/op 1502137 ns/op 1437065 B/op 9924 allocs/op 0.94
BenchmarkDocument/object_1000 - ns/op 1410472 ns/op 1502137 ns/op 0.94
BenchmarkDocument/object_1000 - B/op 1437055 B/op 1437065 B/op 1.00
BenchmarkDocument/object_1000 - allocs/op 9924 allocs/op 9924 allocs/op 1
BenchmarkDocument/object_10000 14762835 ns/op 12350786 B/op 101229 allocs/op 14970583 ns/op 12351293 B/op 101231 allocs/op 0.99
BenchmarkDocument/object_10000 - ns/op 14762835 ns/op 14970583 ns/op 0.99
BenchmarkDocument/object_10000 - B/op 12350786 B/op 12351293 B/op 1.00
BenchmarkDocument/object_10000 - allocs/op 101229 allocs/op 101231 allocs/op 1.00
BenchmarkDocument/tree_100 1025886 ns/op 951028 B/op 6102 allocs/op 1086572 ns/op 951029 B/op 6102 allocs/op 0.94
BenchmarkDocument/tree_100 - ns/op 1025886 ns/op 1086572 ns/op 0.94
BenchmarkDocument/tree_100 - B/op 951028 B/op 951029 B/op 1.00
BenchmarkDocument/tree_100 - allocs/op 6102 allocs/op 6102 allocs/op 1
BenchmarkDocument/tree_1000 75275517 ns/op 86582268 B/op 60112 allocs/op 80004380 ns/op 86582440 B/op 60112 allocs/op 0.94
BenchmarkDocument/tree_1000 - ns/op 75275517 ns/op 80004380 ns/op 0.94
BenchmarkDocument/tree_1000 - B/op 86582268 B/op 86582440 B/op 1.00
BenchmarkDocument/tree_1000 - allocs/op 60112 allocs/op 60112 allocs/op 1
BenchmarkDocument/tree_10000 9739083626 ns/op 8581181312 B/op 600185 allocs/op 9735221656 ns/op 8581190000 B/op 600181 allocs/op 1.00
BenchmarkDocument/tree_10000 - ns/op 9739083626 ns/op 9735221656 ns/op 1.00
BenchmarkDocument/tree_10000 - B/op 8581181312 B/op 8581190000 B/op 1.00
BenchmarkDocument/tree_10000 - allocs/op 600185 allocs/op 600181 allocs/op 1.00
BenchmarkDocument/tree_delete_all_1000 77455361 ns/op 87566254 B/op 75289 allocs/op 83089152 ns/op 87566814 B/op 75289 allocs/op 0.93
BenchmarkDocument/tree_delete_all_1000 - ns/op 77455361 ns/op 83089152 ns/op 0.93
BenchmarkDocument/tree_delete_all_1000 - B/op 87566254 B/op 87566814 B/op 1.00
BenchmarkDocument/tree_delete_all_1000 - allocs/op 75289 allocs/op 75289 allocs/op 1
BenchmarkDocument/tree_edit_gc_100 3797153 ns/op 4147830 B/op 15146 allocs/op 4112848 ns/op 4147845 B/op 15146 allocs/op 0.92
BenchmarkDocument/tree_edit_gc_100 - ns/op 3797153 ns/op 4112848 ns/op 0.92
BenchmarkDocument/tree_edit_gc_100 - B/op 4147830 B/op 4147845 B/op 1.00
BenchmarkDocument/tree_edit_gc_100 - allocs/op 15146 allocs/op 15146 allocs/op 1
BenchmarkDocument/tree_edit_gc_1000 317435856 ns/op 384042914 B/op 154942 allocs/op 333423655 ns/op 384045973 B/op 154946 allocs/op 0.95
BenchmarkDocument/tree_edit_gc_1000 - ns/op 317435856 ns/op 333423655 ns/op 0.95
BenchmarkDocument/tree_edit_gc_1000 - B/op 384042914 B/op 384045973 B/op 1.00
BenchmarkDocument/tree_edit_gc_1000 - allocs/op 154942 allocs/op 154946 allocs/op 1.00
BenchmarkDocument/tree_split_gc_100 2618322 ns/op 2407260 B/op 11131 allocs/op 2822208 ns/op 2407308 B/op 11131 allocs/op 0.93
BenchmarkDocument/tree_split_gc_100 - ns/op 2618322 ns/op 2822208 ns/op 0.93
BenchmarkDocument/tree_split_gc_100 - B/op 2407260 B/op 2407308 B/op 1.00
BenchmarkDocument/tree_split_gc_100 - allocs/op 11131 allocs/op 11131 allocs/op 1
BenchmarkDocument/tree_split_gc_1000 194758477 ns/op 222501189 B/op 122077 allocs/op 208949544 ns/op 222501939 B/op 122088 allocs/op 0.93
BenchmarkDocument/tree_split_gc_1000 - ns/op 194758477 ns/op 208949544 ns/op 0.93
BenchmarkDocument/tree_split_gc_1000 - B/op 222501189 B/op 222501939 B/op 1.00
BenchmarkDocument/tree_split_gc_1000 - allocs/op 122077 allocs/op 122088 allocs/op 1.00
BenchmarkRPC/client_to_server 427809920 ns/op 16151925 B/op 223995 allocs/op 420342208 ns/op 16174032 B/op 224153 allocs/op 1.02
BenchmarkRPC/client_to_server - ns/op 427809920 ns/op 420342208 ns/op 1.02
BenchmarkRPC/client_to_server - B/op 16151925 B/op 16174032 B/op 1.00
BenchmarkRPC/client_to_server - allocs/op 223995 allocs/op 224153 allocs/op 1.00
BenchmarkRPC/client_to_client_via_server 805208348 ns/op 35762852 B/op 480568 allocs/op 781785924 ns/op 36425352 B/op 477909 allocs/op 1.03
BenchmarkRPC/client_to_client_via_server - ns/op 805208348 ns/op 781785924 ns/op 1.03
BenchmarkRPC/client_to_client_via_server - B/op 35762852 B/op 36425352 B/op 0.98
BenchmarkRPC/client_to_client_via_server - allocs/op 480568 allocs/op 477909 allocs/op 1.01
BenchmarkRPC/attach_large_document 1287211425 ns/op 1921253200 B/op 12325 allocs/op 1310347894 ns/op 1918494488 B/op 12536 allocs/op 0.98
BenchmarkRPC/attach_large_document - ns/op 1287211425 ns/op 1310347894 ns/op 0.98
BenchmarkRPC/attach_large_document - B/op 1921253200 B/op 1918494488 B/op 1.00
BenchmarkRPC/attach_large_document - allocs/op 12325 allocs/op 12536 allocs/op 0.98
BenchmarkRPC/adminCli_to_server 553493288 ns/op 19910144 B/op 291131 allocs/op 542082074 ns/op 19959464 B/op 291124 allocs/op 1.02
BenchmarkRPC/adminCli_to_server - ns/op 553493288 ns/op 542082074 ns/op 1.02
BenchmarkRPC/adminCli_to_server - B/op 19910144 B/op 19959464 B/op 1.00
BenchmarkRPC/adminCli_to_server - allocs/op 291131 allocs/op 291124 allocs/op 1.00
BenchmarkLocker 82.26 ns/op 32 B/op 1 allocs/op 70.82 ns/op 16 B/op 1 allocs/op 1.16
BenchmarkLocker - ns/op 82.26 ns/op 70.82 ns/op 1.16
BenchmarkLocker - B/op 32 B/op 16 B/op 2
BenchmarkLocker - allocs/op 1 allocs/op 1 allocs/op 1
BenchmarkLockerParallel 45.65 ns/op 0 B/op 0 allocs/op 39.43 ns/op 0 B/op 0 allocs/op 1.16
BenchmarkLockerParallel - ns/op 45.65 ns/op 39.43 ns/op 1.16
BenchmarkLockerParallel - B/op 0 B/op 0 B/op 1
BenchmarkLockerParallel - allocs/op 0 allocs/op 0 allocs/op 1
BenchmarkLockerMoreKeys 188 ns/op 30 B/op 0 allocs/op 175.2 ns/op 15 B/op 0 allocs/op 1.07
BenchmarkLockerMoreKeys - ns/op 188 ns/op 175.2 ns/op 1.07
BenchmarkLockerMoreKeys - B/op 30 B/op 15 B/op 2
BenchmarkLockerMoreKeys - allocs/op 0 allocs/op 0 allocs/op 1
BenchmarkRWLocker/RWLock_rate_2 49.87 ns/op 0 B/op 0 allocs/op
BenchmarkRWLocker/RWLock_rate_2 - ns/op 49.87 ns/op
BenchmarkRWLocker/RWLock_rate_2 - B/op 0 B/op
BenchmarkRWLocker/RWLock_rate_2 - allocs/op 0 allocs/op
BenchmarkRWLocker/RWLock_rate_10 49.92 ns/op 0 B/op 0 allocs/op
BenchmarkRWLocker/RWLock_rate_10 - ns/op 49.92 ns/op
BenchmarkRWLocker/RWLock_rate_10 - B/op 0 B/op
BenchmarkRWLocker/RWLock_rate_10 - allocs/op 0 allocs/op
BenchmarkRWLocker/RWLock_rate_100 61.07 ns/op 2 B/op 0 allocs/op
BenchmarkRWLocker/RWLock_rate_100 - ns/op 61.07 ns/op
BenchmarkRWLocker/RWLock_rate_100 - B/op 2 B/op
BenchmarkRWLocker/RWLock_rate_100 - allocs/op 0 allocs/op
BenchmarkRWLocker/RWLock_rate_1000 90.23 ns/op 9 B/op 0 allocs/op
BenchmarkRWLocker/RWLock_rate_1000 - ns/op 90.23 ns/op
BenchmarkRWLocker/RWLock_rate_1000 - B/op 9 B/op
BenchmarkRWLocker/RWLock_rate_1000 - allocs/op 0 allocs/op
BenchmarkChange/Push_10_Changes 4708496 ns/op 150110 B/op 1625 allocs/op 4499701 ns/op 150249 B/op 1617 allocs/op 1.05
BenchmarkChange/Push_10_Changes - ns/op 4708496 ns/op 4499701 ns/op 1.05
BenchmarkChange/Push_10_Changes - B/op 150110 B/op 150249 B/op 1.00
BenchmarkChange/Push_10_Changes - allocs/op 1625 allocs/op 1617 allocs/op 1.00
BenchmarkChange/Push_100_Changes 16734576 ns/op 778761 B/op 8512 allocs/op 16060622 ns/op 778786 B/op 8505 allocs/op 1.04
BenchmarkChange/Push_100_Changes - ns/op 16734576 ns/op 16060622 ns/op 1.04
BenchmarkChange/Push_100_Changes - B/op 778761 B/op 778786 B/op 1.00
BenchmarkChange/Push_100_Changes - allocs/op 8512 allocs/op 8505 allocs/op 1.00
BenchmarkChange/Push_1000_Changes 132957683 ns/op 7250253 B/op 79331 allocs/op 127883322 ns/op 7174923 B/op 79324 allocs/op 1.04
BenchmarkChange/Push_1000_Changes - ns/op 132957683 ns/op 127883322 ns/op 1.04
BenchmarkChange/Push_1000_Changes - B/op 7250253 B/op 7174923 B/op 1.01
BenchmarkChange/Push_1000_Changes - allocs/op 79331 allocs/op 79324 allocs/op 1.00
BenchmarkChange/Pull_10_Changes 3764737 ns/op 123755 B/op 1452 allocs/op 3696210 ns/op 124527 B/op 1452 allocs/op 1.02
BenchmarkChange/Pull_10_Changes - ns/op 3764737 ns/op 3696210 ns/op 1.02
BenchmarkChange/Pull_10_Changes - B/op 123755 B/op 124527 B/op 0.99
BenchmarkChange/Pull_10_Changes - allocs/op 1452 allocs/op 1452 allocs/op 1
BenchmarkChange/Pull_100_Changes 5488146 ns/op 352907 B/op 5177 allocs/op 5298195 ns/op 354620 B/op 5178 allocs/op 1.04
BenchmarkChange/Pull_100_Changes - ns/op 5488146 ns/op 5298195 ns/op 1.04
BenchmarkChange/Pull_100_Changes - B/op 352907 B/op 354620 B/op 1.00
BenchmarkChange/Pull_100_Changes - allocs/op 5177 allocs/op 5178 allocs/op 1.00
BenchmarkChange/Pull_1000_Changes 11207152 ns/op 2194069 B/op 44678 allocs/op 10690325 ns/op 2195057 B/op 44678 allocs/op 1.05
BenchmarkChange/Pull_1000_Changes - ns/op 11207152 ns/op 10690325 ns/op 1.05
BenchmarkChange/Pull_1000_Changes - B/op 2194069 B/op 2195057 B/op 1.00
BenchmarkChange/Pull_1000_Changes - allocs/op 44678 allocs/op 44678 allocs/op 1
BenchmarkSnapshot/Push_3KB_snapshot 19578211 ns/op 911337 B/op 8516 allocs/op 18424239 ns/op 902145 B/op 8513 allocs/op 1.06
BenchmarkSnapshot/Push_3KB_snapshot - ns/op 19578211 ns/op 18424239 ns/op 1.06
BenchmarkSnapshot/Push_3KB_snapshot - B/op 911337 B/op 902145 B/op 1.01
BenchmarkSnapshot/Push_3KB_snapshot - allocs/op 8516 allocs/op 8513 allocs/op 1.00
BenchmarkSnapshot/Push_30KB_snapshot 138136053 ns/op 8434513 B/op 92250 allocs/op 131931516 ns/op 8149350 B/op 87133 allocs/op 1.05
BenchmarkSnapshot/Push_30KB_snapshot - ns/op 138136053 ns/op 131931516 ns/op 1.05
BenchmarkSnapshot/Push_30KB_snapshot - B/op 8434513 B/op 8149350 B/op 1.03
BenchmarkSnapshot/Push_30KB_snapshot - allocs/op 92250 allocs/op 87133 allocs/op 1.06
BenchmarkSnapshot/Pull_3KB_snapshot 7604615 ns/op 1114489 B/op 20061 allocs/op 7518925 ns/op 1115929 B/op 20060 allocs/op 1.01
BenchmarkSnapshot/Pull_3KB_snapshot - ns/op 7604615 ns/op 7518925 ns/op 1.01
BenchmarkSnapshot/Pull_3KB_snapshot - B/op 1114489 B/op 1115929 B/op 1.00
BenchmarkSnapshot/Pull_3KB_snapshot - allocs/op 20061 allocs/op 20060 allocs/op 1.00
BenchmarkSnapshot/Pull_30KB_snapshot 19948234 ns/op 9309033 B/op 193605 allocs/op 20158323 ns/op 9303400 B/op 193604 allocs/op 0.99
BenchmarkSnapshot/Pull_30KB_snapshot - ns/op 19948234 ns/op 20158323 ns/op 0.99
BenchmarkSnapshot/Pull_30KB_snapshot - B/op 9309033 B/op 9303400 B/op 1.00
BenchmarkSnapshot/Pull_30KB_snapshot - allocs/op 193605 allocs/op 193604 allocs/op 1.00
BenchmarkSplayTree/stress_test_100000 0.1899 ns/op 0 B/op 0 allocs/op 0.1981 ns/op 0 B/op 0 allocs/op 0.96
BenchmarkSplayTree/stress_test_100000 - ns/op 0.1899 ns/op 0.1981 ns/op 0.96
BenchmarkSplayTree/stress_test_100000 - B/op 0 B/op 0 B/op 1
BenchmarkSplayTree/stress_test_100000 - allocs/op 0 allocs/op 0 allocs/op 1
BenchmarkSplayTree/stress_test_200000 0.3965 ns/op 0 B/op 0 allocs/op 0.3762 ns/op 0 B/op 0 allocs/op 1.05
BenchmarkSplayTree/stress_test_200000 - ns/op 0.3965 ns/op 0.3762 ns/op 1.05
BenchmarkSplayTree/stress_test_200000 - B/op 0 B/op 0 B/op 1
BenchmarkSplayTree/stress_test_200000 - allocs/op 0 allocs/op 0 allocs/op 1
BenchmarkSplayTree/stress_test_300000 0.5904 ns/op 0 B/op 0 allocs/op 0.5689 ns/op 0 B/op 0 allocs/op 1.04
BenchmarkSplayTree/stress_test_300000 - ns/op 0.5904 ns/op 0.5689 ns/op 1.04
BenchmarkSplayTree/stress_test_300000 - B/op 0 B/op 0 B/op 1
BenchmarkSplayTree/stress_test_300000 - allocs/op 0 allocs/op 0 allocs/op 1
BenchmarkSplayTree/random_access_100000 0.01265 ns/op 0 B/op 0 allocs/op 0.01259 ns/op 0 B/op 0 allocs/op 1.00
BenchmarkSplayTree/random_access_100000 - ns/op 0.01265 ns/op 0.01259 ns/op 1.00
BenchmarkSplayTree/random_access_100000 - B/op 0 B/op 0 B/op 1
BenchmarkSplayTree/random_access_100000 - allocs/op 0 allocs/op 0 allocs/op 1
BenchmarkSplayTree/random_access_200000 0.03209 ns/op 0 B/op 0 allocs/op 0.02909 ns/op 0 B/op 0 allocs/op 1.10
BenchmarkSplayTree/random_access_200000 - ns/op 0.03209 ns/op 0.02909 ns/op 1.10
BenchmarkSplayTree/random_access_200000 - B/op 0 B/op 0 B/op 1
BenchmarkSplayTree/random_access_200000 - allocs/op 0 allocs/op 0 allocs/op 1
BenchmarkSplayTree/random_access_300000 0.04509 ns/op 0 B/op 0 allocs/op 0.04425 ns/op 0 B/op 0 allocs/op 1.02
BenchmarkSplayTree/random_access_300000 - ns/op 0.04509 ns/op 0.04425 ns/op 1.02
BenchmarkSplayTree/random_access_300000 - B/op 0 B/op 0 B/op 1
BenchmarkSplayTree/random_access_300000 - allocs/op 0 allocs/op 0 allocs/op 1
BenchmarkSplayTree/editing_trace_bench 0.001891 ns/op 0 B/op 0 allocs/op 0.002184 ns/op 0 B/op 0 allocs/op 0.87
BenchmarkSplayTree/editing_trace_bench - ns/op 0.001891 ns/op 0.002184 ns/op 0.87
BenchmarkSplayTree/editing_trace_bench - B/op 0 B/op 0 B/op 1
BenchmarkSplayTree/editing_trace_bench - allocs/op 0 allocs/op 0 allocs/op 1
BenchmarkSync/memory_sync_10_test 7232 ns/op 1342 B/op 35 allocs/op 6919 ns/op 1183 B/op 35 allocs/op 1.05
BenchmarkSync/memory_sync_10_test - ns/op 7232 ns/op 6919 ns/op 1.05
BenchmarkSync/memory_sync_10_test - B/op 1342 B/op 1183 B/op 1.13
BenchmarkSync/memory_sync_10_test - allocs/op 35 allocs/op 35 allocs/op 1
BenchmarkSync/memory_sync_100_test 56244 ns/op 9516 B/op 268 allocs/op 53447 ns/op 8529 B/op 269 allocs/op 1.05
BenchmarkSync/memory_sync_100_test - ns/op 56244 ns/op 53447 ns/op 1.05
BenchmarkSync/memory_sync_100_test - B/op 9516 B/op 8529 B/op 1.12
BenchmarkSync/memory_sync_100_test - allocs/op 268 allocs/op 269 allocs/op 1.00
BenchmarkSync/memory_sync_1000_test 627229 ns/op 75814 B/op 2107 allocs/op 575996 ns/op 74876 B/op 2146 allocs/op 1.09
BenchmarkSync/memory_sync_1000_test - ns/op 627229 ns/op 575996 ns/op 1.09
BenchmarkSync/memory_sync_1000_test - B/op 75814 B/op 74876 B/op 1.01
BenchmarkSync/memory_sync_1000_test - allocs/op 2107 allocs/op 2146 allocs/op 0.98
BenchmarkSync/memory_sync_10000_test 7821553 ns/op 747220 B/op 20373 allocs/op 7468624 ns/op 754372 B/op 20505 allocs/op 1.05
BenchmarkSync/memory_sync_10000_test - ns/op 7821553 ns/op 7468624 ns/op 1.05
BenchmarkSync/memory_sync_10000_test - B/op 747220 B/op 754372 B/op 0.99
BenchmarkSync/memory_sync_10000_test - allocs/op 20373 allocs/op 20505 allocs/op 0.99
BenchmarkTextEditing 5205550318 ns/op 3922660280 B/op 20619736 allocs/op 5297178369 ns/op 3922671656 B/op 20619745 allocs/op 0.98
BenchmarkTextEditing - ns/op 5205550318 ns/op 5297178369 ns/op 0.98
BenchmarkTextEditing - B/op 3922660280 B/op 3922671656 B/op 1.00
BenchmarkTextEditing - allocs/op 20619736 allocs/op 20619745 allocs/op 1.00
BenchmarkTree/10000_vertices_to_protobuf 4332493 ns/op 6363241 B/op 70025 allocs/op 4391932 ns/op 6363240 B/op 70025 allocs/op 0.99
BenchmarkTree/10000_vertices_to_protobuf - ns/op 4332493 ns/op 4391932 ns/op 0.99
BenchmarkTree/10000_vertices_to_protobuf - B/op 6363241 B/op 6363240 B/op 1.00
BenchmarkTree/10000_vertices_to_protobuf - allocs/op 70025 allocs/op 70025 allocs/op 1
BenchmarkTree/10000_vertices_from_protobuf 223195553 ns/op 442305963 B/op 290039 allocs/op 223587457 ns/op 442304264 B/op 290038 allocs/op 1.00
BenchmarkTree/10000_vertices_from_protobuf - ns/op 223195553 ns/op 223587457 ns/op 1.00
BenchmarkTree/10000_vertices_from_protobuf - B/op 442305963 B/op 442304264 B/op 1.00
BenchmarkTree/10000_vertices_from_protobuf - allocs/op 290039 allocs/op 290038 allocs/op 1.00
BenchmarkTree/20000_vertices_to_protobuf 9069463 ns/op 12890903 B/op 140028 allocs/op 9415715 ns/op 12890832 B/op 140028 allocs/op 0.96
BenchmarkTree/20000_vertices_to_protobuf - ns/op 9069463 ns/op 9415715 ns/op 0.96
BenchmarkTree/20000_vertices_to_protobuf - B/op 12890903 B/op 12890832 B/op 1.00
BenchmarkTree/20000_vertices_to_protobuf - allocs/op 140028 allocs/op 140028 allocs/op 1
BenchmarkTree/20000_vertices_from_protobuf 894683880 ns/op 1697474200 B/op 580044 allocs/op 889504614 ns/op 1697478812 B/op 580089 allocs/op 1.01
BenchmarkTree/20000_vertices_from_protobuf - ns/op 894683880 ns/op 889504614 ns/op 1.01
BenchmarkTree/20000_vertices_from_protobuf - B/op 1697474200 B/op 1697478812 B/op 1.00
BenchmarkTree/20000_vertices_from_protobuf - allocs/op 580044 allocs/op 580089 allocs/op 1.00
BenchmarkTree/30000_vertices_to_protobuf 14449693 ns/op 18976186 B/op 210029 allocs/op 13707963 ns/op 18976262 B/op 210029 allocs/op 1.05
BenchmarkTree/30000_vertices_to_protobuf - ns/op 14449693 ns/op 13707963 ns/op 1.05
BenchmarkTree/30000_vertices_to_protobuf - B/op 18976186 B/op 18976262 B/op 1.00
BenchmarkTree/30000_vertices_to_protobuf - allocs/op 210029 allocs/op 210029 allocs/op 1
BenchmarkTree/30000_vertices_from_protobuf 2011961312 ns/op 3751744064 B/op 870142 allocs/op 1983871431 ns/op 3751747496 B/op 870094 allocs/op 1.01
BenchmarkTree/30000_vertices_from_protobuf - ns/op 2011961312 ns/op 1983871431 ns/op 1.01
BenchmarkTree/30000_vertices_from_protobuf - B/op 3751744064 B/op 3751747496 B/op 1.00
BenchmarkTree/30000_vertices_from_protobuf - allocs/op 870142 allocs/op 870094 allocs/op 1.00

This comment was automatically generated by workflow using github-action-benchmark.


codecov bot commented Feb 6, 2025

Codecov Report

Attention: Patch coverage is 17.28395% with 134 lines in your changes missing coverage. Please review.

Project coverage is 38.49%. Comparing base (b4da816) to head (dafd831).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
server/backend/database/testcases/testcases.go 0.00% 35 Missing ⚠️
server/backend/messagebroker/kafka.go 0.00% 22 Missing ⚠️
server/backend/messagebroker/config.go 0.00% 20 Missing ⚠️
cmd/yorkie/server.go 0.00% 18 Missing ⚠️
server/backend/messagebroker/messagebroker.go 20.00% 16 Missing ⚠️
api/yorkie/v1/yorkie.pb.go 22.22% 7 Missing ⚠️
server/backend/backend.go 0.00% 7 Missing ⚠️
server/config.go 0.00% 4 Missing ⚠️
server/rpc/yorkie_server.go 82.35% 2 Missing and 1 partial ⚠️
server/backend/database/client_info.go 0.00% 1 Missing ⚠️
... and 1 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1143      +/-   ##
==========================================
- Coverage   38.54%   38.49%   -0.05%     
==========================================
  Files         165      169       +4     
  Lines       25247    25413     +166     
==========================================
+ Hits         9731     9784      +53     
- Misses      14698    14808     +110     
- Partials      818      821       +3     

☔ View full report in Codecov by Sentry.

@hackerwins (Member)

I temporarily changed it to ready to activate CodeRabbit reviews.

@hackerwins hackerwins marked this pull request as ready for review February 6, 2025 11:42

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

🧹 Nitpick comments (9)
server/backend/messagebroker/config.go (1)

20-23: Use consistent naming convention in YAML tags.

The YAML tags use PascalCase (ConnectionURL) while typical YAML conventions favor snake_case or kebab-case.

 type Config struct {
-	Address string `yaml:"ConnectionURL"`
-	Topic   string `yaml:"Topic"`
+	Address string `yaml:"connection_url"`
+	Topic   string `yaml:"topic"`
 }
server/backend/messagebroker/messagebroker.go (1)

59-72: Enhance error handling in Ensure function.

The Ensure function should:

  1. Validate the configuration before creating the broker
  2. Add more detailed logging for debugging
 func Ensure(kafkaConf *Config) Broker {
 	if kafkaConf == nil || kafkaConf.Address == "" {
+		logging.DefaultLogger().Info("no kafka configuration provided, using dummy broker")
 		return &DummyBroker{}
 	}

+	if err := kafkaConf.Validate(); err != nil {
+		logging.DefaultLogger().Warnf("invalid kafka configuration: %v, using dummy broker", err)
+		return &DummyBroker{}
+	}
+
 	logging.DefaultLogger().Infof(
 		"connecting to kafka: %s, topic: %s",
 		kafkaConf.Address,
 		kafkaConf.Topic,
 	)

 	return newKafkaBroker(kafkaConf.Address, kafkaConf.Topic)
 }
api/types/events/events.go (1)

72-84: Enhance ClientEvent type with additional metadata.

The ClientEvent struct could be more comprehensive to include:

  1. Timestamp
  2. Client metadata
  3. String() method for ClientEventType
+// String returns the string representation of ClientEventType
+func (t ClientEventType) String() string {
+	return string(t)
+}

 type ClientEvent struct {
 	// Type is the type of the event.
 	Type ClientEventType
+	// Add additional fields
+	Timestamp  time.Time
+	ClientID   string
+	Metadata   map[string]interface{}
 }
server/backend/backend.go (1)

151-153: Consider enhancing message broker initialization.

The broker initialization could benefit from additional error handling and context for graceful shutdown.

Consider modifying the initialization to:

  1. Return any initialization errors
  2. Accept a context for graceful shutdown
-// 08. Create the message broker instance.
-broker := messagebroker.Ensure(kafkaConf)
+// 08. Create the message broker instance with context for graceful shutdown.
+broker, err := messagebroker.New(context.Background(), kafkaConf)
+if err != nil {
+    return nil, fmt.Errorf("failed to initialize message broker: %w", err)
+}
server/config.go (1)

79-84: Consider adding default values for Kafka configuration.

While the Kafka configuration field is correctly added, there are no default values set in the ensureDefaultValue method. This could make it harder for users to get started with the Kafka integration.

Consider adding default values similar to other configurations:

 func (c *Config) ensureDefaultValue() {
+    if c.Kafka != nil {
+        if c.Kafka.Address == "" {
+            c.Kafka.Address = "localhost:29092"
+        }
+        if c.Kafka.Topic == "" {
+            c.Kafka.Topic = "user-events"
+        }
+    }
test/bench/push_pull_bench_test.go (1)

66-66: Consider adding Kafka performance benchmarks.

While passing nil for Kafka config is correct for existing benchmarks, consider adding performance benchmarks for the Kafka integration to measure its impact on the system.

Consider adding benchmarks that (a minimal sketch follows this list):

  1. Measure latency with and without Kafka integration
  2. Measure throughput of user event publishing
  3. Evaluate different Kafka configurations
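
A minimal sketch of such a benchmark, assuming the package path github.com/yorkie-team/yorkie/server/backend/messagebroker and the Ensure/Produce signatures quoted elsewhere in this review; it is illustrative only and not part of the PR.

package bench

import (
	"context"
	"testing"

	"github.com/yorkie-team/yorkie/server/backend/messagebroker"
)

// benchmarkProduce measures event publishing through whichever broker Ensure returns.
func benchmarkProduce(b *testing.B, conf *messagebroker.Config) {
	broker := messagebroker.Ensure(conf)
	msg := messagebroker.UserEventMessage{
		UserID:    "bench-user",
		ProjectID: "bench-project",
	}

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if err := broker.Produce(context.Background(), msg); err != nil {
			b.Fatal(err)
		}
	}
}

// BenchmarkUserEvent_Dummy measures the no-op path taken when Kafka is not configured.
func BenchmarkUserEvent_Dummy(b *testing.B) {
	benchmarkProduce(b, nil)
}

// BenchmarkUserEvent_Kafka measures end-to-end publishing and assumes the Kafka broker
// from build/docker/analytics/docker-compose.yml is running locally.
func BenchmarkUserEvent_Kafka(b *testing.B) {
	benchmarkProduce(b, &messagebroker.Config{
		Address: "localhost:29092",
		Topic:   "user-events",
	})
}
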
cmd/yorkie/server.go (1)

89-94: Consider adding validation for Kafka configuration.

The configuration block is correctly implemented. However, consider adding validation for the Kafka configuration to ensure the address format is valid and the topic name meets Kafka's requirements.

 if kafkaAddress != "" && kafkaTopic != "" {
+    // Validate Kafka address format
+    if !strings.Contains(kafkaAddress, ":") {
+        return fmt.Errorf("invalid kafka address format: %s", kafkaAddress)
+    }
+    // Validate topic name (Kafka topics cannot be empty and have character limitations)
+    if len(kafkaTopic) == 0 || len(kafkaTopic) > 249 {
+        return fmt.Errorf("invalid kafka topic name: %s", kafkaTopic)
+    }
     conf.Kafka = &messagebroker.Config{
         Address: kafkaAddress,
         Topic:   kafkaTopic,
     }
 }
build/docker/analytics/init-user-events-db.sql (1)

11-11: Consider adding an index for analytics queries.

The metadata column type change to JSON is appropriate for storing structured metadata. However, for analytics queries, consider adding an index on the timestamp column to optimize MAU calculations.

CREATE INDEX idx_user_events_timestamp ON user_events (timestamp);
api/yorkie/v1/yorkie.proto (1)

44-45: Add field documentation.

The new fields are correctly added and maintain backward compatibility. However, consider adding documentation comments to describe:

  1. The purpose and format of user_id
  2. Expected key-value pairs in the metadata map
 message ActivateClientRequest {
   string client_key = 1;
+  // Unique identifier for the user. If not provided, client_id will be used.
   string user_id = 2;
+  // Additional metadata about the client activation.
+  // Keys should be lowercase and use underscores.
+  // Common keys: device_type, app_version, os_version
   map<string, string> metadata = 3;
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ccf25f0 and 252de54.

⛔ Files ignored due to path filters (2)
  • api/yorkie/v1/yorkie.pb.go is excluded by !**/*.pb.go
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (24)
  • api/docs/yorkie/v1/admin.openapi.yaml (58 hunks)
  • api/docs/yorkie/v1/cluster.openapi.yaml (7 hunks)
  • api/docs/yorkie/v1/resources.openapi.yaml (67 hunks)
  • api/docs/yorkie/v1/yorkie.openapi.yaml (53 hunks)
  • api/types/events/events.go (1 hunks)
  • api/yorkie/v1/yorkie.proto (1 hunks)
  • build/docker/analytics/docker-compose.yml (3 hunks)
  • build/docker/analytics/init-user-events-db.sql (1 hunks)
  • cmd/yorkie/server.go (4 hunks)
  • go.mod (3 hunks)
  • server/backend/backend.go (6 hunks)
  • server/backend/messagebroker/config.go (1 hunks)
  • server/backend/messagebroker/dummy.go (1 hunks)
  • server/backend/messagebroker/kafka.go (1 hunks)
  • server/backend/messagebroker/messagebroker.go (1 hunks)
  • server/config.go (3 hunks)
  • server/config.sample.yml (1 hunks)
  • server/packs/packs_test.go (1 hunks)
  • server/rpc/server_test.go (1 hunks)
  • server/rpc/yorkie_server.go (2 hunks)
  • server/server.go (1 hunks)
  • test/bench/push_pull_bench_test.go (1 hunks)
  • test/complex/main_test.go (1 hunks)
  • test/integration/housekeeping_test.go (1 hunks)
✅ Files skipped from review due to trivial changes (5)
  • api/docs/yorkie/v1/yorkie.openapi.yaml
  • server/backend/messagebroker/dummy.go
  • api/docs/yorkie/v1/admin.openapi.yaml
  • api/docs/yorkie/v1/cluster.openapi.yaml
  • api/docs/yorkie/v1/resources.openapi.yaml
🔇 Additional comments (20)
test/integration/housekeeping_test.go (1)

61-67: LGTM!

The addition of nil as the last argument to backend.New() is correct for test scenarios where Kafka functionality is not required.

server/server.go (1)

63-69: LGTM!

The addition of conf.Kafka to backend.New() correctly integrates the Kafka configuration into the server initialization process.

test/complex/main_test.go (1)

75-96: LGTM!

The addition of nil as the Kafka configuration is appropriate for these complex test scenarios where Kafka functionality is not needed.

server/backend/backend.go (2)

65-66: LGTM!

The MsgBroker field is well-documented and properly typed.


200-202: LGTM!

The error handling during shutdown is appropriate, logging errors without failing the shutdown process.

server/config.go (2)

30-30: LGTM!

The import of the messagebroker package is correctly added.


138-142: LGTM!

The validation logic for Kafka configuration is correctly implemented, ensuring that if Kafka config is provided, it is valid.

server/rpc/server_test.go (1)

93-93: Consider adding Kafka integration tests.

While passing nil for Kafka config is correct for non-Kafka test cases, consider adding test cases that verify the Kafka integration functionality (a minimal sketch follows the list below).

Would you like me to help create test cases for the Kafka integration? This could include:

  1. Tests with valid Kafka configuration
  2. Tests with invalid Kafka configuration
  3. Tests verifying user events are correctly published to Kafka
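
A minimal sketch of such a test, built only on the Ensure/Produce behavior quoted elsewhere in this review (nil or empty config falls back to the no-op DummyBroker); the package path and exact API are assumptions.

package messagebroker_test

import (
	"context"
	"testing"

	"github.com/yorkie-team/yorkie/server/backend/messagebroker"
)

// TestEnsureFallsBackToDummy checks that a missing or empty Kafka configuration
// yields the no-op broker, so client activation keeps working without Kafka.
func TestEnsureFallsBackToDummy(t *testing.T) {
	for _, conf := range []*messagebroker.Config{
		nil,
		{Address: "", Topic: "user-events"},
	} {
		broker := messagebroker.Ensure(conf)
		if _, ok := broker.(*messagebroker.DummyBroker); !ok {
			t.Fatalf("expected DummyBroker, got %T", broker)
		}

		// Producing through the dummy broker must be a silent no-op.
		if err := broker.Produce(context.Background(), messagebroker.UserEventMessage{
			UserID:    "test-user",
			ProjectID: "test-project",
		}); err != nil {
			t.Fatalf("dummy broker produce: %v", err)
		}
	}
}
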
server/packs/packs_test.go (1)

119-119: LGTM!

The addition of nil for Kafka config is consistent with other test files.

cmd/yorkie/server.go (2)

30-30: LGTM!

The import statement is correctly placed and follows Go's import grouping conventions.


57-58: LGTM!

The Kafka configuration variables are correctly declared and follow Go naming conventions.

server/rpc/yorkie_server.go (1)

33-33: LGTM!

The import statement is correctly placed and follows Go's import grouping conventions.

go.mod (3)

19-19: New Kafka Dependency Added.
The addition of github.com/segmentio/kafka-go v0.4.47 supports the new Kafka producer functionality for Yorkie analytics. Please ensure that the Kafka producer configuration in the code consistently uses this library's API.


35-35: New Indirect lz4 Dependency.
The indirect dependency github.com/pierrec/lz4/v4 v4.1.15 has been added. Verify that any runtime components relying on lz4 compression are compatible with this version.


73-74: Updated scram and stringprep Dependencies.
The versions for github.com/xdg-go/scram and github.com/xdg-go/stringprep have been bumped to v1.1.2 and v1.0.4, respectively. Please confirm that these updates do not introduce breaking changes for modules that utilize these libraries.

server/config.sample.yml (1)

109-116: New Kafka Message Broker Configuration Section.
A new MessageBroker section has been added with parameters ConnectionURL set to "localhost:29092" and Topic set to "user-events". This configuration is essential for Kafka integration in the backend. Please ensure that the field names and default values match the expectations in the server initialization and Kafka producer setup.

build/docker/analytics/docker-compose.yml (4)

52-52: Kafka Port Mapping Update.
The Kafka service port mapping has been updated to "29092:29092", which aligns with the new configuration. Verify that all client connections and configuration parameters correctly reference this port.


61-63: Kafka Listener Configuration Update.
The environment variables for Kafka now include:

  • KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:29092
  • KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://localhost:29092
  • KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT

These changes are critical for exposing an external listener on port 29092. Please review these settings to ensure that external connectors can reliably access the Kafka broker.


67-67: Kafka Healthcheck Command Adjustment.
The healthcheck command for the Kafka service now uses kafka-topics.sh --bootstrap-server localhost:9092 --list. Confirm that using localhost in this context correctly resolves within the container network and meets the health monitoring requirements.


144-144: Routine Load Healthcheck Enhancement.
The routine load check in the init-starrocks-database service now accepts either "State: NEED_SCHEDULE" or "State: RUNNING". This flexible check improves robustness by acknowledging multiple valid states. Ensure that these conditions accurately capture the operational statuses expected during initialization.

Comment on lines 25 to 29
// Validate validates this config.
func (c *Config) Validate() error {
	// TODO(hackerwins): Implement this.
	return nil
}

⚠️ Potential issue

Implement validation for required configuration.

The Validate method should check for required fields and validate the Kafka connection URL format.

 func (c *Config) Validate() error {
-	// TODO(hackerwins): Implement this.
-	return nil
+	if c.Address == "" {
+		return fmt.Errorf("kafka address is required")
+	}
+	if c.Topic == "" {
+		return fmt.Errorf("kafka topic is required")
+	}
+	return nil
 }

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 31 to 40
// newKafkaBroker creates a new instance of KafkaProducer.
func newKafkaBroker(address string, topic string) *KafkaBroker {
	return &KafkaBroker{
		writer: &kafka.Writer{
			Addr:     kafka.TCP(address),
			Topic:    topic,
			Balancer: &kafka.LeastBytes{},
		},
	}
}

🛠️ Refactor suggestion

Add robust Kafka writer configuration.

The current writer configuration lacks important settings for production use:

  • Retry mechanism
  • Write timeout
  • Batch size configuration
 func newKafkaBroker(address string, topic string) *KafkaBroker {
 	return &KafkaBroker{
 		writer: &kafka.Writer{
 			Addr:     kafka.TCP(address),
 			Topic:    topic,
 			Balancer: &kafka.LeastBytes{},
+			// Add production-ready configurations
+			BatchTimeout: time.Second * 1,
+			WriteTimeout: time.Second * 10,
+			RequiredAcks: kafka.RequireOne,
+			MaxAttempts:  3,
 		},
 	}
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// newKafkaBroker creates a new instance of KafkaProducer.
func newKafkaBroker(address string, topic string) *KafkaBroker {
	return &KafkaBroker{
		writer: &kafka.Writer{
			Addr:     kafka.TCP(address),
			Topic:    topic,
			Balancer: &kafka.LeastBytes{},
		},
	}
}

// newKafkaBroker creates a new instance of KafkaProducer.
func newKafkaBroker(address string, topic string) *KafkaBroker {
	return &KafkaBroker{
		writer: &kafka.Writer{
			Addr:     kafka.TCP(address),
			Topic:    topic,
			Balancer: &kafka.LeastBytes{},
			// Add production-ready configurations
			BatchTimeout: time.Second * 1,
			WriteTimeout: time.Second * 10,
			RequiredAcks: kafka.RequireOne,
			MaxAttempts:  3,
		},
	}
}

Comment on lines +42 to +58
// Produce produces a user event to Kafka.
func (mb *KafkaBroker) Produce(
	ctx context.Context,
	msg Message,
) error {
	value, err := msg.Marshal()
	if err != nil {
		return fmt.Errorf("marshal message: %v", err)
	}

	// TODO(hackerwins): Consider using message batching.
	if err := mb.writer.WriteMessages(ctx, kafka.Message{Value: value}); err != nil {
		return fmt.Errorf("write message to kafka: %v", err)
	}

	return nil
}

🛠️ Refactor suggestion

Enhance error handling and implement message batching.

The Produce method needs improvements:

  1. Check for nil writer
  2. Consider implementing message batching as noted in TODO
  3. Add context timeout for WriteMessages
 func (mb *KafkaBroker) Produce(
 	ctx context.Context,
 	msg Message,
 ) error {
+	if mb.writer == nil {
+		return fmt.Errorf("kafka writer is not initialized")
+	}
+
 	value, err := msg.Marshal()
 	if err != nil {
 		return fmt.Errorf("marshal message: %v", err)
 	}

-	// TODO(hackerwins): Consider using message batching.
+	// Add timeout to prevent hanging
+	ctx, cancel := context.WithTimeout(ctx, time.Second*5)
+	defer cancel()
+
 	if err := mb.writer.WriteMessages(ctx, kafka.Message{Value: value}); err != nil {
 		return fmt.Errorf("write message to kafka: %v", err)
 	}

 	return nil
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Produce produces a user event to Kafka.
func (mb *KafkaBroker) Produce(
	ctx context.Context,
	msg Message,
) error {
	value, err := msg.Marshal()
	if err != nil {
		return fmt.Errorf("marshal message: %v", err)
	}

	// TODO(hackerwins): Consider using message batching.
	if err := mb.writer.WriteMessages(ctx, kafka.Message{Value: value}); err != nil {
		return fmt.Errorf("write message to kafka: %v", err)
	}

	return nil
}

// Produce produces a user event to Kafka.
func (mb *KafkaBroker) Produce(
	ctx context.Context,
	msg Message,
) error {
	if mb.writer == nil {
		return fmt.Errorf("kafka writer is not initialized")
	}

	value, err := msg.Marshal()
	if err != nil {
		return fmt.Errorf("marshal message: %v", err)
	}

	// Add timeout to prevent hanging
	ctx, cancel := context.WithTimeout(ctx, time.Second*5)
	defer cancel()

	if err := mb.writer.WriteMessages(ctx, kafka.Message{Value: value}); err != nil {
		return fmt.Errorf("write message to kafka: %v", err)
	}

	return nil
}

Comment on lines 34 to 41
// UserEventMessage represents a message for user events
type UserEventMessage struct {
	UserID    string                 `json:"user_id"`
	EventType events.ClientEventType `json:"event_type"`
	ProjectID string                 `json:"project_id"`
	UserAgent string                 `json:"user_agent"`
	Metadata  map[string]string      `json:"metadata"`
}

🛠️ Refactor suggestion

Add validation for UserEventMessage fields.

The UserEventMessage struct should validate its required fields before marshaling.

 type UserEventMessage struct {
 	UserID    string                 `json:"user_id"`
 	EventType events.ClientEventType `json:"event_type"`
 	ProjectID string                 `json:"project_id"`
 	UserAgent string                 `json:"user_agent"`
 	Metadata  map[string]string      `json:"metadata"`
 }
+
+// Validate checks that the required fields are set before marshaling.
+func (m UserEventMessage) Validate() error {
+	if m.EventType == "" {
+		return fmt.Errorf("event type is required")
+	}
+	if m.ProjectID == "" {
+		return fmt.Errorf("project id is required")
+	}
+	return nil
+}

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 78 to 90
if err := s.backend.MsgBroker.Produce(
	ctx,
	messagebroker.UserEventMessage{
		// TODO(hackerwins): Use client ID as user ID if it is empty.
		UserID:    req.Msg.UserId,
		EventType: events.ClientActivatedEvent,
		ProjectID: project.ID.String(),
		UserAgent: req.Header().Get("x-yorkie-user-agent"),
		Metadata:  req.Msg.Metadata,
	},
); err != nil {
	logging.From(ctx).Error(err)
}

⚠️ Potential issue

Address TODO and improve error handling.

  1. The TODO comment about using client ID as user ID should be addressed before merging.
  2. Consider whether error in message production should affect the client activation.
-    if err := s.backend.MsgBroker.Produce(
+    userID := req.Msg.UserId
+    if userID == "" {
+        userID = cli.ID.String()
+    }
+    err = s.backend.MsgBroker.Produce(
         ctx,
         messagebroker.UserEventMessage{
-            // TODO(hackerwins): Use client ID as user ID if it is empty.
-            UserID:    req.Msg.UserId,
+            UserID:    userID,
             EventType: events.ClientActivatedEvent,
             ProjectID: project.ID.String(),
             UserAgent: req.Header().Get("x-yorkie-user-agent"),
             Metadata:  req.Msg.Metadata,
         },
-    ); err != nil {
-        logging.From(ctx).Error(err)
-    }
+    )
+    if err != nil {
+        // Log the error but don't fail the activation
+        logging.From(ctx).Errorf("failed to produce user event: %v", err)
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if err := s.backend.MsgBroker.Produce(
	ctx,
	messagebroker.UserEventMessage{
		// TODO(hackerwins): Use client ID as user ID if it is empty.
		UserID:    req.Msg.UserId,
		EventType: events.ClientActivatedEvent,
		ProjectID: project.ID.String(),
		UserAgent: req.Header().Get("x-yorkie-user-agent"),
		Metadata:  req.Msg.Metadata,
	},
); err != nil {
	logging.From(ctx).Error(err)
}

userID := req.Msg.UserId
if userID == "" {
	userID = cli.ID.String()
}
err = s.backend.MsgBroker.Produce(
	ctx,
	messagebroker.UserEventMessage{
		UserID:    userID,
		EventType: events.ClientActivatedEvent,
		ProjectID: project.ID.String(),
		UserAgent: req.Header().Get("x-yorkie-user-agent"),
		Metadata:  req.Msg.Metadata,
	},
)
if err != nil {
	// Log the error but don't fail the activation
	logging.From(ctx).Errorf("failed to produce user event: %v", err)
}

@hackerwins hackerwins marked this pull request as draft February 6, 2025 11:48