feat: Connection for Kafka source & sink #19270

Merged
merged 70 commits into from
Nov 28, 2024

Conversation

@tabVersion (Contributor) commented Nov 5, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

Following #18975.

Checklist

  • I have written necessary rustdoc comments
  • I have added necessary unit tests and integration tests
  • I have added test labels as necessary. See details.
  • I have added fuzzing tests or opened an issue to track them. (Optional, recommended for new SQL features. See Sqlsmith: SQL feature generation #7934.)
  • My PR contains breaking changes. (If it deprecates some features, please create a tracking issue to remove them in the future).
  • All checks passed in ./risedev check (or alias, ./risedev c)
  • My PR changes performance-critical code. (Please run macro/micro-benchmarks and show the results.)
  • My PR contains critical fixes that are necessary to be merged into the latest release. (Please check out the details)

Documentation

  • My PR needs documentation updates. (Please use the Release note section below to summarize the impact on users)

Release note

Introducing a new catalog object, CONNECTION, integrated with SECRET.

Please note that the legacy CREATE CONNECTION for AWS PrivateLink was deprecated in #18975.

New syntax:

CREATE CONNECTION [ IF NOT EXISTS ] <connection name> WITH (
  type = 'kafka' | 'iceberg',
  some_attr_1 = secret <secret name>,
  some_attr_2 = '...'
);

Planned support for Kafka, Iceberg (by @chenzl25), and FS (by @wcy-fdu) in the first stage.
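For illustration, here is what a concrete Kafka connection might look like when combined with SECRET. This is a sketch: all object names and values are placeholders, and the SECRET backend shown is an assumption.

```sql
-- hypothetical example: a Kafka connection whose SASL password comes from a SECRET
CREATE SECRET kafka_pwd WITH ( backend = 'meta' ) AS 'my-password';

CREATE CONNECTION IF NOT EXISTS my_kafka_conn WITH (
    type = 'kafka',
    properties.bootstrap.server = 'broker1:9092',
    properties.sasl.mechanism = 'PLAIN',
    properties.sasl.username = 'user',
    properties.sasl.password = secret kafka_pwd
);
```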


When creating a source/sink from a connection, the connector must match the connection type, and the referencing key must be `connection`.

create source s ( ... ) with (
  connector = 'kafka', -- must match the connection type
  connection = <connection name> -- the key must be `connection`, marking a ref to the connection catalog
) format ... encode ...;

The attributes defined in the connection and in the source/table/sink must not overlap:

create connection conn with ( type = 'kafka', a = 'a', b = 'b' );

-- rejected: overlapping keys `a` and `b`
create source s with ( connector = 'kafka', a = '1', b = '2', connection = conn ) format ... encode ...;
  • One more thing: we still allow creating a source/table/sink without a connection, and that syntax remains unchanged.

To perform connection validation, we need a new Kafka ACL: DESCRIBE CLUSTER (the privilege is authorized via username and password; it is not related to the consumer group).


A connection stores key-value pairs in the catalog, and validation only operates on a copy.
When building a source/sink/table, we first fill the KVs from the connection catalog into the WITH options, then start the create procedure.
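The fill-then-create flow, combined with the no-overlap rule above, can be sketched as follows. `merge_connection_props` is a hypothetical helper for illustration, not the actual Rust implementation:

```python
def merge_connection_props(with_options: dict, connection_props: dict) -> dict:
    """Fill the connection catalog's KV pairs into the statement's WITH options.

    Rejects the statement when any key appears in both places, mirroring the
    "attributes must not overlap" rule described above.
    """
    overlap = sorted(set(with_options) & set(connection_props))
    if overlap:
        raise ValueError(f"overlapping keys between WITH options and connection: {overlap}")
    merged = dict(connection_props)  # start from the connection's stored KVs
    merged.update(with_options)      # then add the statement-level options
    return merged
```

With disjoint keys the merge succeeds; any shared key (like `a` in the rejected example above) raises before the create procedure starts.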

Accepted Kafka connection props

(connection related)

  • properties.bootstrap.server (the only required one)
  • properties.security.protocol
  • properties.ssl.endpoint.identification.algorithm
  • properties.ssl.ca.location
  • properties.ssl.ca.pem
  • properties.ssl.certificate.location
  • properties.ssl.certificate.pem
  • properties.ssl.key.location
  • properties.ssl.key.pem
  • properties.ssl.key.password
  • properties.sasl.mechanism
  • properties.sasl.username
  • properties.sasl.password
  • properties.sasl.kerberos.service.name
  • properties.sasl.kerberos.keytab
  • properties.sasl.kerberos.principal
  • properties.sasl.kerberos.kinit.cmd
  • properties.sasl.kerberos.min.time.before.relogin
  • properties.sasl.oauthbearer.config

(private link related)

  • privatelink.targets
  • privatelink.endpoint

handle_create_connection resolves the private link, removes both privatelink.targets and privatelink.endpoint, and inserts broker.rewrite.endpoints into the props.

So if users specify privatelink.targets and privatelink.endpoint in a connection, they cannot set them again when creating a source/table/sink.
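The PrivateLink rewrite could be sketched like this. This is a simplified model of what the description above says handle_create_connection does; the target format and the broker-to-endpoint mapping here are assumptions for illustration, not the real implementation:

```python
import json

def resolve_private_link(props: dict) -> dict:
    """Drop privatelink.targets / privatelink.endpoint from the props and
    insert a broker.rewrite.endpoints entry, as described for
    handle_create_connection. The mapping format below is hypothetical."""
    out = dict(props)
    targets = out.pop("privatelink.targets", None)
    endpoint = out.pop("privatelink.endpoint", None)
    if targets is None or endpoint is None:
        return out  # nothing to resolve
    # hypothetical rewrite: point each target's port at the privatelink endpoint
    ports = [t["port"] for t in json.loads(targets)]
    out["broker.rewrite.endpoints"] = json.dumps(
        {f"broker-{i}": f"{endpoint}:{p}" for i, p in enumerate(ports)}
    )
    return out
```

After this step the connection catalog only carries the rewritten endpoints, which is why users cannot set the privatelink keys again at source/table/sink creation time.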

(aws auth related: for msk)

  • aws.region
  • endpoint
  • aws.credentials.access_key_id
  • aws.credentials.secret_access_key
  • aws.credentials.session_token
  • aws.credentials.role.arn
  • aws.credentials.role.external_id

tabVersion and others added 30 commits October 22, 2024 21:37
@tabVersion (Contributor, Author) replied:

Please add or update some e2e test to cover this feature.

Does e2e_test/source_inline/connection/ddl.slt lack some scenarios?

@fuyufjh (Member) left a comment:

Overall LGTM

@@ -246,6 +269,7 @@ message Connection {
string name = 4;
oneof info {

Well, then let's place connection_params outside the oneof at least

@@ -156,6 +161,9 @@ message SinkFormatDesc {
optional plan_common.EncodeType key_encode = 4;
// Secret used for format encode options.
map<string, secret.SecretRef> secret_refs = 5;

// ref connection for schema registry
optional uint32 connection_id = 6;
I see. You keep both connection_id and the resolved connection arguments in these message structures, right? It's acceptable to me but a bit counter-intuitive.

And to keep the design simple and aligned with the secret ref: IIRC, a secret ref doesn't keep the resolved plaintext secret at all; it is always resolved whenever the secret is used.

@fuyufjh fuyufjh changed the title feat: Connection for connector usage feat: Connection for Kafka source & sink Nov 26, 2024
@wcy-fdu (Contributor) left a comment:

Common part generally LGTM.

@@ -123,6 +126,8 @@ message Source {
uint32 associated_table_id = 12;
}
string definition = 13;

// ref connection for connector
optional uint32 connection_id = 14;

So we just reuse Source.connection_id?

@tabVersion (Contributor, Author) replied:

Yes and same for the schema registry part and sink.
