
Microsoft SQL Database

  • Version

    3.5

  • Agent restriction

    None

  • Processing type

    Row by row & Bulk

  • Multi-input step

    Supports

The Microsoft SQL Database connector allows you to execute SQL commands and stored procedures remotely in a Microsoft SQL database.

Possible uses of the multi-input step include facilitating branch synchronization and accepting multiple inputs within integration processes. This feature helps streamline complex integration tasks and achieve better scalability, parallel processing, and system responsiveness. For more information, refer to the article dedicated to the Multi-Input Step.

Accessing input data

If the input schema is selected, the input data will be predefined in the temporary table #InputData (see Predefined variables). You can use the entire table in your statements or target a specific value using the following placeholders:

Placeholder Type Description
${inputData} temporary table Temporary table filled with input data. This table will be deleted after the connector run is over.
${input[<row number>].<column name>} value Takes the value from <row number>th row (zero-based index) and column named <column name>.

The connector is able to process complex data columns in the input schema. Such data is represented as an NVARCHAR(MAX) column filled with serialized JSON.
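For example, assuming a hypothetical complex input column named Address, individual fields can be extracted from the serialized JSON using SQL Server's built-in JSON_VALUE function:

select ID, JSON_VALUE(Address, '$.city') as City
from #InputData;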

Using placeholders

Assume the connector input data in the following format, followed by placeholders, which can be used for direct reference to the input data table:

Row number ID Login
0 1 (${input[0].ID}) Alice (${input[0].Login})
1 2 (${input[1].ID}) Bob (${input[1].Login})

Returning all input data

select * from #InputData;

Returns the contents of the entire temporary table filled with input data:

ID Login
1 Alice
2 Bob

Referring to a specific value from the input data

select 
    ${input[0].ID} as ID, 
    Login as Login
from #InputData

Returns the same list as in the previous example, except that the value from the first row of the ID column of the input table is always used as the ID (${input[0].ID} = 1):

ID Login
1 Alice
1 Bob

Predefined variables

Variable Data type Description
@TaskRunID bigint ID of the currently executing TaskRun.
#InputData temporary table Temporary table filled with input data. This table will be deleted when the connector is finished running.
@DataCheckpoint nvarchar(max) The last row value of the stored column (configured as Data check point) from the last TaskRun. The column configured as @DataCheckpoint must be part of the output.
@EndpointTokenID bigint When a task is executed via a linked endpoint, the variable contains the ID of the token used for authorization (see the table below).
@EndpointTokenName nvarchar(max) When a task is executed via a linked endpoint, the variable contains the name of the token used for authorization (see the table below).
Authorization token @EndpointTokenID @EndpointTokenName
None null null
Primary 0 "Primary"
Secondary ID of the used secondary token Name of the used secondary token
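The token variables can be used, for example, to branch a statement on how the task was triggered. A minimal sketch (the RunContext column name is illustrative):

select
    case
        when @EndpointTokenID is null then 'direct run'
        else concat('endpoint run, token: ', @EndpointTokenName)
    end as RunContext;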

Returning output data

If the output schema is selected, the result of the last select statement that is not assigned to any variable is used as the step output. The structure of the query result must match the structure of the output schema.

If complex schema columns are returned to the output, the result will contain the column data serialized as a JSON string.

Example

-- NOT the last select in the statement, data will NOT be sent to the output.
select 0 as ID, Login from #InputData;

-- IS the last valid select in the statement, data WILL be sent to the output.
select 1 as ID, Login from #InputData;

-- Is the last select, but its result is assigned to a variable. Therefore, the data will NOT be sent to the output.
declare @maxUserID int = (select max(ID) from DXTask);

Logging

Direct logging from SQL connector into task run log is not supported.


Configuration

Connection String configuration

Connection string

The connection string can contain input data placeholders.

For example, ${input.DbName} will be replaced by the value of the DbName column from the very first input row.
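For example, a connection string that takes the database name from the input data might look like this (the server name and credentials are illustrative):

Server=sql.example.com;Database=${input.DbName};User Id=integration;Password=<secret>;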

SQL Statement configuration

Statement

SQL statement to be executed. If the input schema is selected, the input data will be predefined in the temporary table #InputData. You can use the entire table in your statements or target a specific value using the following placeholders:

Placeholder Type Description
${inputData} temporary table Temporary table filled with input data. This table will be deleted after the connector run is over.
${input[<row number>].<column name>} value Takes the value from <row number>th row (zero-based index) and column named <column name>.
@TaskRunID bigint ID of the currently executing TaskRun.

If the output schema is selected, the result of the last select statement that is not assigned to any variable is used as the step output. The structure of the query result must match the structure of the output schema.

Data checkpoint column

The data checkpoint column is a column (field) from which the platform takes the last-row value after each executed task run and stores it as a Data checkpoint. The stored value can be used in SQL statements to control which data should be processed in the next run; refer to it via the predefined variable @DataCheckpoint.

A typical use case is processing data in cycles, where each cycle processes only a subset of the entire set because of its total size. If you use, for example, a record ID as the data checkpoint column, the platform stores the last processed ID after each task run. If your statement compares @DataCheckpoint against the record IDs in the data set, only records that have not yet been processed will be considered in the next task run.
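A minimal sketch of such an incremental query using @DataCheckpoint, assuming a hypothetical source table Orders whose ID column is configured as the data checkpoint column (the isnull fallback covers the very first run, when no checkpoint is stored yet):

select ID, CustomerName, Amount
from Orders
where ID > isnull(try_cast(@DataCheckpoint as bigint), 0)
order by ID;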


Input & Output Schema

Input

Data schema is optional

The connector does not require a specific schema; the required data structure can be achieved through correct configuration. However, an individual integration task step may need to match the output data structure of the preceding task step, using a data schema selected from the repository or a newly created input schema.

Output

Data schema is optional

The connector does not require a specific schema; the required data structure can be achieved through correct configuration. However, an individual integration task step may need to match the expected input data structure of the following task step, using a data schema selected from the repository or a newly created output schema.


Release notes

3.5.1

  • Added support for multi-input.

3.4.2

  • Connection and command timeouts now respect the task step timeout.

3.4.1

  • Fixed processing of sensitive errors.

3.4.0

  • Separated configurations from other supported DB systems. Fixed debug script and DataCheckpoint processing.

3.3.4

  • Updated package binaries because of changes in another included connector.

3.3.0

  • Changed the way of referring to the data checkpoint: the predefined variable changed from @ to @DataCheckpoint.

3.2.5

  • Updated package binaries because of changes in another included connector.

3.2.0

  • Implementation of debug script generation.

3.1.11

  • Updated package binaries because of changes in another included connector.

3.1.6

  • Updated package binaries because of changes in another included connector.

3.1.5

  • Fixed replacement of input placeholders in the connection string configuration.

3.1.4

  • Fixed processing complex input schema.

3.1.3

  • Fixed shared nuget package versions.

3.1.2

  • Fixed processing of nullable properties.