Connector to an ODBC-compliant database that allows you to execute SQL statements and stored procedures using a configured ODBC driver. Use this connector to integrate with various database systems that support ODBC, including Microsoft SQL Server, MySQL, PostgreSQL, Oracle, and others. The connector runs on a remote agent and supports read and write operations across heterogeneous database environments.
Prerequisites
To connect through ODBC, the machine where the agent runs must have an ODBC driver manager and the appropriate database-specific ODBC driver installed. On Windows, the ODBC Driver Manager is included by default; on Linux or macOS, the unixODBC manager is typically used together with the database driver. Each database provides its own ODBC driver (e.g., SQL Server, PostgreSQL, MySQL, Oracle, DB2), which must be installed to enable connectivity. To connect to Microsoft SQL Server, for example, you can download the Microsoft ODBC Driver for SQL Server from the official Microsoft website.
Make sure the driver is correctly configured on the machine where the agent runs.
Using placeholders
Assume the connector receives input data in the following format. The placeholders shown next to each value can be used to reference the input data directly:
Row number | ID | Login |
---|---|---|
0 | 1 ${input[0].ID} | Alice ${input[0].Login} |
1 | 2 ${input[1].ID} | Bob ${input[1].Login} |
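For example, assuming a hypothetical users table with ID and Login columns (the table and column names are illustrative, not part of the connector), the placeholders can be embedded directly in a statement; string values must be quoted according to your database's SQL dialect:
-- Inserts the first input row into a hypothetical users table
INSERT INTO users ("ID", "Login") VALUES (${input[0].ID}, '${input[0].Login}');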
Predefined variables
Variable | Data type | Description |
---|---|---|
@TaskRunID | text | ID of the currently executing TaskRun. The data type should be fixed in a future version. |
@EndpointTokenID | integer | ID of the currently used EndpointToken. |
@EndpointTokenName | text | Name of the currently used EndpointToken. |
@DataCheckpoint | text | Data checkpoint value. |
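As an illustration, the predefined variables can be used anywhere in a statement, for example to record run metadata in a hypothetical audit table:
-- audit_log and its columns are illustrative, not created by the connector
INSERT INTO audit_log ("task_run_id", "token_name") VALUES (@TaskRunID, @EndpointTokenName);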
Returning output data
If an output schema is selected, the result of the last SELECT statement that is not assigned to any variable is used as the step output. The structure of the query result must match the structure of the output schema.
Selected column names must always be wrapped in double quotes and must match the case-sensitive names defined in the output schema.
If the output schema contains complex (nested) structures, the corresponding column data is returned serialized as a JSON string.
Example
SELECT 'Test' AS "Column1", 123 AS "Column2", GETDATE() AS "Column3"
Logging
Direct logging from the SQL connector into the task run log is not supported.
Configuration
ODBC Connection String (ver. 1.0.0)
ODBC Connection string
Specifies the full ODBC connection string used to connect to your target database. Ensure the string includes all necessary parameters such as driver, server, database name, authentication details, and any driver-specific options required for your environment.
Example
Driver={ODBC Driver 18 for SQL Server};Server=localhost;Database=odbctest;UID=sa;PWD=S3cur3P@ssW0rd!;Encrypt=yes;TrustServerCertificate=yes;Connection Timeout=600;
ODBC Statement (ver. 1.0.0)
ODBC Statement
Defines the SQL statement or stored procedure to be executed on the target database.
You can use any valid SQL syntax supported by your target database instance.
Placeholder | Type | Description |
---|---|---|
${input[<row number>].<column name>} | value | Takes the value from the <row number>-th row (zero-based index) and the column named <column name>. |
@TaskRunID | integer | ID of the currently executing TaskRun. |
@EndpointTokenID | integer | ID of the currently used EndpointToken. |
@EndpointTokenName | text | Name of the currently used EndpointToken. |
@DataCheckpoint | text | Data checkpoint value. |
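As a sketch, a single statement can combine input placeholders and predefined variables; the users table and its columns below are illustrative:
-- Updates a row identified by an input placeholder and stamps it with the current task run
UPDATE users SET "Login" = '${input[0].Login}', "LastTaskRun" = @TaskRunID WHERE "ID" = ${input[0].ID};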
Data checkpoint column
The data checkpoint column is the column (field) from which the platform takes the value of the last row after each executed task run and stores it as the data checkpoint. This value can be referenced in SQL statements through the predefined variable @DataCheckpoint to control which data should be processed in the next run. A typical use case is processing data in cycles, where each cycle handles only a subset of the entire data set because of its total size. For example, if you use a record ID as the data checkpoint column, the platform stores the last processed ID after each task run. If your statement compares @DataCheckpoint against the record IDs in the data set, only records that have not yet been processed are considered in the next run, as shown in the sketch below.
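A minimal sketch of such an incremental load, assuming a hypothetical users table whose ID column is configured as the data checkpoint column:
-- Returns only rows beyond the last stored checkpoint; after the run,
-- the ID of the last returned row becomes the new checkpoint value.
-- Depending on your database, @DataCheckpoint (text) may need a cast to a numeric type.
SELECT "ID", "Login" FROM users WHERE "ID" > @DataCheckpoint ORDER BY "ID";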
Input & Output Schema
Input
Data schema is optional
The connector does not expect a specific schema; the required data structure can be achieved through configuration. Although the connector does not generally require a schema, an individual integration task step may need to match the output data structure of the preceding task step, in which case you can select a data schema from the repository or create a new input schema.
Output
Data schema is optional
The connector does not expect a specific schema; the required data structure can be achieved through configuration. Although the connector does not generally require a schema, an individual integration task step may need to match the expected data structure of the following task step, in which case you can select a data schema from the repository or create a new output schema.
Release notes
3.0.4
- Initial version of the ODBC Database connector.