The Node.js connector allows you to run JavaScript scripts using the Node.js runtime (version 20.4). The Node.js connector may look similar to the JS mapper, but it is much more powerful. Although the Node.js ecosystem is very large, we provide only a limited yet powerful set of imported packages. For security reasons, your JavaScript code can only access a limited set of imported types and packages, and it cannot use dynamic imports or the require statement.
Possible uses of the Multi input step include facilitating branch synchronization and enabling the acceptance of multiple inputs within integration processes. By incorporating this feature, you can streamline complex integration tasks and achieve better scalability, parallel processing, and system responsiveness. For more information, refer to the article dedicated to the Multi-Input Step.
Provided packages
- fast-xml-parser (4.2.6) for parsing & building XML documents
- js-yaml (4.1.0) for parsing & building YAML documents
- jsonwebtoken (9.0.1) for JWT support
- lodash (4.17.15) routines for arrays & object manipulation
- node-fetch (3.3.1) HTTP client
- node:crypto Node.js built-in cryptographic functions
- @azure/identity (3.3.0) Azure authorization support
- @azure/storage-blob (12.16.0) Blob storage access
- @azure/openai (1.0.0-beta.6) Azure OpenAI
- @azure/arm-costmanagement (1.0.0-beta.1) Azure cost management
- @azure/arm-cognitiveservices (7.5.0) Cognitive services
- @azure/arm-botservice (4.0.0) Azure bot
- aws-sdk (2.1467.0) Amazon Web Services
- aws4 (1.12.0) AWS Signature library
- puppeteer (21.5.2) Library which provides a high-level API to control Chrome
- pdf-lib (1.17.1) Generate PDF documents
- ExcelJS (4.4.0) Read, manipulate and write spreadsheet data and styles to XLSX and JSON
The above packages may provide more types, but Integray exposes only a limited set of them. All exposed types are covered in the Node.js processor examples.
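Since Node.js 20 exposes the Web Crypto API as a global crypto object, a statement can, for instance, generate identifiers without any import. A minimal sketch; the output columns here are hypothetical and must match your output schema:

// Minimal sketch: crypto is available as a global in Node.js 20 (Web Crypto API)
// "id" and "createdAt" are hypothetical columns; adjust to your output schema
const id = crypto.randomUUID();
return [
  { id: id, createdAt: new Date().toISOString() }
];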
In addition to the above packages and types, the runtime exposes the following APIs:
/* Connector logger API */
log {
  trace: (message) => void,
  debug: (message) => void,
  info: (message) => void,
  warn: (message) => void,
  /* Calling this method will force the connector to mark statement execution as failed */
  error: (message) => void,
}

/* Wait the specified amount of time (in milliseconds) */
sleep: (timeout) => Promise<void>

// For example, sleep for 1 second
await sleep(1000);
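As an illustration, both APIs can be combined inside a statement. The loop below is purely illustrative:

// Illustrative sketch: log progress and pause between iterations
for (let attempt = 1; attempt <= 3; attempt++) {
  log.info(`Attempt ${attempt} of 3`);
  await sleep(1000); // wait one second between iterations
}
return []; // real statements must return data matching the output schema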
Configuration
Node.js statement
JavaScript statement to be executed using the Node.js service.
The connector expects its output to be an array of objects that matches the output schema in structure. The defined Statement represents the body of the async function whose output is parsed and returned as the output of the connector itself. The last line of the Statement must therefore be:
return <array_of_objects>;
Example
return [
{ "hello": "Alice" }
];
Note
As the statement is async, you must ensure that all Promises are awaited. Non-awaited code may not complete before your statement finishes.
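A minimal sketch of the difference, using the sleep API shown above:

// Awaited: guaranteed to finish before the statement returns
await sleep(500);
log.info("this always logs before the statement completes");

// Not awaited: may still be pending when the statement returns,
// so the message below may never be logged
sleep(500).then(() => log.warn("this may be lost"));

return [];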
Accessing input data
If the input schema is selected, the input data will be predefined in the inputData variable (see Predefined variables). Input data is always in the form of an array, even if it contains a single record. Thus, individual records must be accessed via an index (e.g. inputData[0]) or via any mapping or iteration function (map(), forEach(), etc. - see examples).
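For instance, a minimal sketch mirroring the Example above, assuming a hypothetical input schema with a name column:

// inputData is always an array, even for a single record
// "name" is a hypothetical input column; adjust to your input schema
return inputData.map((row) => ({ hello: row.name }));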
Predefined variables
| Variable | Data type | Description |
|---|---|---|
| inputData | array | Input data processed by JSON.parse() into an array. |
| outputData | array | Output that can only be used when the script fails. |
| DataCheckpoint | any | Value of the stored column on the last row from the last TaskRun. |
| TaskRunID | number | ID of the currently executing task run. |
| EndpointTokenID | bigint | When a task is executed via a linked endpoint, contains the ID of the token used for authorization (see the table below). |
| EndpointTokenName | string | When a task is executed via a linked endpoint, contains the name of the token used for authorization (see the table below). |
| localVariable | string | Stores data specific to a single task run. |
| Authorization token | EndpointTokenID | EndpointTokenName |
|---|---|---|
| None | null | null |
| Primary | 0 | "Primary" |
| Secondary | ID of the used secondary token | Name of the used secondary token |
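A minimal sketch of branching on these variables, assuming the value is delivered as a BigInt as the table's data type suggests (hence the 0n literal):

// Determine how the current task run was authorized
if (EndpointTokenID === null) {
  log.info("Task was not started via an endpoint token");
} else if (EndpointTokenID === 0n) {
  log.info("Authorized with the primary token");
} else {
  log.info(`Authorized with secondary token: ${EndpointTokenName}`);
}
return [];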
Data checkpoint column
The data checkpoint column is a column (field) from which the platform takes the last row's value after each executed task run and stores it as a Data checkpoint. The data checkpoint value can be used in the JS statements to control which data should be processed in the next run. You can refer to the value using the predefined variable DataCheckpoint. A typical use case is processing data in cycles, where every cycle handles only a subset of the entire set due to its total size. If you use, for example, a record ID as the data checkpoint column, the platform will store the last processed ID from the subset handled by each task run. If your statement evaluates the data checkpoint value against the IDs of the records in the data set, you can ensure that only unprocessed records are considered in the next task run.
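A minimal sketch of that pattern, assuming a hypothetical numeric id column is configured as the data checkpoint column:

// On the very first run there is no checkpoint yet, so fall back to 0
const lastProcessedId = DataCheckpoint === null ? 0 : Number(DataCheckpoint);

// Keep only records that previous runs have not processed yet
return inputData.filter((row) => row.id > lastProcessedId);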
Input & Output Schema
Input
Data schema is optional
The connector does not expect a specific schema; the required data structure can be achieved through correct configuration. Although the connector generally does not require a schema, an individual integration task step may need to match the output data structure of the preceding task step by using a data schema selected from the repository or by creating a new input schema.
Output
Data schema is mandatory
The connector requires a mandatory output data schema, which must be selected by the user from the existing data schema repository, or a new one must be created. The connector will fail without structured data.
Release notes
3.2.1
- The primary input data is set to null if the input schema is not selected or the data is unavailable.
- Multi input feature included - more info here
3.1.2
- Fixed processing of sensitive errors
3.1.0
- First release