Feature scope
Targets (data type handling, batching, SQL object generation, tests, etc.)
Description
Currently an SQL target will open and hold a session to the target for each stream it is given. This means that if a tap has 100 streams, the SQL target will open 100 sessions and hold them until the meltano run is finished. It would be nice if SQL targets could use a single connector and leverage the SQLAlchemy connection pool it holds.
This can be accomplished if an instance of the target's connector class is initialized and held as a property, and that connector property is passed on to SQLSink instances when they are initialized by the add_sink method. The property code might look like this:
class SQLTarget(Target):
    """Target implementation for SQL destinations."""

    default_connector_class = SQLConnector
    _target_connector: SQLConnector | None = None

    @property
    def target_connector(self) -> SQLConnector:
        """The connector object, created once and shared by all sinks.

        Returns:
            The connector object.
        """
        if self._target_connector is None:
            self._target_connector = self.default_connector_class(dict(self.config))
        return self._target_connector
The add_sink method would need to pass self.target_connector to the SQLSink when it is initialized. A couple of quick notes:
The SQLSink class has a connector parameter, while the base Sink class does not (see the sketch of the sink side after the code below).
The add_sink method is decorated with @final, so it cannot be overridden in the SQLTarget class; the change would need to go into the base Target class's add_sink.
The code could look something like this:
@final
def add_sink(
    self,
    stream_name: str,
    schema: dict,
    key_properties: list[str] | None = None,
) -> Sink:
    # ... comments and code removed to show the change better
    sink = sink_class(
        target=self,
        stream_name=stream_name,
        schema=schema,
        key_properties=key_properties,
        connector=self.target_connector,
    )
    # ... some code removed to assist with clarity
    return sink
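For reference, here is a rough sketch of how the SQLSink side could consume that connector parameter; the class names follow the SDK, but the body is a simplification rather than the SDK's exact code. The idea is that the sink uses the connector handed to it by the target and only builds a private one as a fallback, so every sink created by add_sink shares the target's single connector and its SQLAlchemy engine and connection pool.

class SQLSink(BatchSink):
    """SQL-aware sink (sketch of the relevant constructor logic only)."""

    connector_class = SQLConnector

    def __init__(
        self,
        target: SQLTarget,
        stream_name: str,
        schema: dict,
        key_properties: list[str] | None,
        connector: SQLConnector | None = None,
    ) -> None:
        # Reuse the shared connector passed in by the target; only create a
        # new per-sink connector when none is supplied.
        self._connector: SQLConnector = connector or self.connector_class(
            dict(target.config),
        )
        super().__init__(target, stream_name, schema, key_properties)

With this in place, 100 streams would still mean 100 sinks, but only one connector, and the number of open connections would be governed by the engine's pool settings rather than by the number of streams.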
Working examples can be found in target-mssql--buzzcutnorman:target.py and target-postgress--buzzcutnorman:target.py.