
JS Mapper

Version 3.3

Connector

The connector can be used with the platform background agent.

Connector processing type: Both (Row by row & Bulk). Default type: Row by row.

Debug script enabled.

The JS Mapper connector allows you to run JavaScript scripts using the Jint interpreter (version 3.0 beta 2037). For a full list of supported commands and functions of this interpreter, see here. The beta version of the Jint 3.x interpreter is preferred over version 2.x, as recommended by its authors, because of its enhanced functionality and improved performance.

Possible uses of the Multi-Input step include branch synchronization and accepting multiple inputs within integration processes. This feature helps streamline complex integration tasks and improves scalability, parallel processing, and system responsiveness. For more information, refer directly to the article dedicated to the Multi-Input step.

Accessing input data

If the input schema is selected, the input data will be predefined in the inputData variable (see Predefined Variables). Input data is always in the form of an array, even if it contains a single record. Thus, individual records must be accessed via an index (e.g. inputData[0]) or via any mapping or iteration function (map(), forEach(), etc. - see examples).
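A minimal sketch of iterating over input records with map(). In the connector, inputData is injected by the platform; it is stubbed here for illustration, and the input column names (UserID, UserLogin) are hypothetical:

```javascript
// In the connector, inputData is predefined by the platform.
// It is stubbed here so the sketch is self-contained.
const inputData = [
    { "UserID": 1, "UserLogin": "Alice" },
    { "UserID": 2, "UserLogin": "Bob" }
];

// Map each input record to the structure of the output schema.
const output = inputData.map(function (row) {
    return {
        "ID": row.UserID,
        "Login": row.UserLogin
    };
});

// In the connector, the script would end with: return output;
```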

Predefined variables

Variable | Data type | Description
inputData | array | Input data parsed by JSON.parse() into an array.
outputData | array | Output used only if the script fails.
DataCheckpoint | any | Value of the data checkpoint column from the last row of the last task run.
TaskRunID | number | ID of the currently executing task run.
EndpointTokenID | bigint | When a task is executed via a linked endpoint, contains the ID of the token used for authorization (see the table below).
EndpointTokenName | string | When a task is executed via a linked endpoint, contains the name of the token used for authorization (see the table below).

Authorization token | EndpointTokenID | EndpointTokenName
None | null | null
Primary | 0 | "Primary"
Secondary | ID of the used secondary token | Name of the used secondary token
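A small sketch of how the token variables might be used to tell the authorization sources apart. EndpointTokenID and EndpointTokenName are predefined in the connector; they are stubbed here with sample values for illustration:

```javascript
// Predefined in the connector; stubbed here so the sketch is self-contained.
const EndpointTokenID = 0;
const EndpointTokenName = "Primary";

// Decide how the task run was authorized, following the token table above.
let authSource;
if (EndpointTokenID === null) {
    authSource = "not executed via endpoint";
} else if (EndpointTokenID === 0) {
    authSource = "primary token";
} else {
    authSource = "secondary token: " + EndpointTokenName;
}
```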

Returning output data

The connector expects its output to be an array of objects whose structure matches the output schema. The defined Statement represents the body of the function whose output is parsed and returned as the output of the connector itself. The last line of the Statement (see Configuration) must therefore be:

return <array>;

Example

If an output schema is defined:

Column Data type
ID integer
Login string

and you want to return two records to the output, use the following code:

return [                    // always returns an array of objects
    {                       // record 1
        "ID": 1,            // value of column ID
        "Login": "Alice"    // value for the Login column
    },
    {                       // record 2
        "ID": 2,            // value for column ID
        "Login": "Bob"      // value for the Login column
    }
];

Output data must always be in the form of an array, even if it contains a single record.

// this will work
return [                    
    {                       
        "ID": 1,            
        "Login": "Alice"    
    }
];

// this will not work!
return {                
    "ID": 1,            
    "Login": "Alice"
};

Be careful when formatting your code

JavaScript allows you to write statements without trailing semicolons; a newline can then terminate a statement (automatic semicolon insertion). For example, the following code returns nothing at all instead of the requested record.

// this code returns an empty result!
return  // a statement terminated by a newline, simply returns nothing
[       // unreachable code
    {                       
        "ID": 1,            
        "Login": "Alice"    
    }
];

Returning data before script failure

JS Mapper allows you to return data to the output even if the script fails. This can be done by assigning a value to the outputData variable. Its contents are then used as output only if the script fails.

// data passed as the output of the step if the script fails
outputData = [ { "ID": 1, "Login": "Alice" }, { "ID": 2, "Login": "Bob" } ];

// throws an intentional (or unexpected) exception, causing the script to fail
throw 'An error occurred';

// this code will not be executed
return ...

The script must always include a return statement on the last line

Even if you assign to the outputData variable, end the script with a return statement. If the script completes successfully, the output defined by the return statement is used in preference to outputData.

Logging

From the JS Mapper connector, you can log directly to the task run log. A prebuilt log class provides the following methods for this purpose:

// TRACE
log.trace("Your trace message!");

// DEBUG
log.debug("Your debug message!");

// INFO
log.info("Your information message!");

// WARNING
log.warn("Your warning message!");

// ERROR - aborts script execution!
log.error("Your error message!");

Arguments passed to logging methods must always be of type string. If you want to log the whole object, use the JSON.stringify() serialization first and then pass the resulting string as the method argument:

let arr = [ { "ID": 1, "Login": "Alice" }, { "ID": 2, "Login": "Bob" } ];

log.info(arr);                  // will be logged as System.Object[]
log.info(JSON.stringify(arr));  // will be logged as [{"ID":1, "Login": "Alice"},{"ID":2, "Login": "Bob"}].

return [];

Keep in mind that if you log a message using the log.error() method, the script execution will immediately abort and the step will end with the Failed connector status.

log.error("Logging error!");   

// exit...

log.info("Logging message!"); // <== this code will not execute.

return [];

Configuration

JS Mapping Statement configuration

Statement

JavaScript statement to be executed using the Jint interpreter. For a full list of supported commands and functions of this interpreter, see documentation.

The connector expects its output to be an array of objects whose structure matches the output schema. The defined Statement represents the body of the function whose output is parsed and returned as the output of the connector itself. The last line of the Statement must therefore be:

return <array_of_objects>;

Example

return [
    { "hello": "Alice" }
];

Data checkpoint column

The data checkpoint column is a column (field) from which the platform takes the value of the last row after each executed task run and stores it as a data checkpoint. The data checkpoint value can be used in JS statements to control which data should be processed in the next run. You can refer to the value using the predefined variable DataCheckpoint.

Example of use: processing data in cycles, where each cycle processes only a subset of the entire set because of its total size. If you use e.g. the record ID as the data checkpoint column, the platform stores after each cycle the last processed ID from the data subset handled by that task run. If your statement compares the data checkpoint value against the IDs of the records in the data set, you can ensure that only unprocessed records are considered in the next task run.
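The cycle described above can be sketched as follows. DataCheckpoint and inputData are predefined in the connector; they are stubbed here with sample values, and using ID as the checkpoint column is an assumption for illustration:

```javascript
// Predefined in the connector; stubbed here so the sketch is self-contained.
const DataCheckpoint = 2;   // last processed ID stored by the previous task run
const inputData = [
    { "ID": 1, "Login": "Alice" },
    { "ID": 2, "Login": "Bob" },
    { "ID": 3, "Login": "Carol" }
];

// Keep only the records not yet processed in a previous run.
// On the very first run there is no checkpoint yet, so take everything.
const unprocessed = inputData.filter(function (row) {
    return DataCheckpoint === null || row.ID > DataCheckpoint;
});

// With ID configured as the data checkpoint column, the platform stores the
// ID of the last returned row as the DataCheckpoint for the next run.
// In the connector, the script would end with: return unprocessed;
```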

Release notes

3.3.7

  • The primary input data is set to null if the input schema is not selected or the data is unavailable.

3.3.5

  • Multi input feature included - more info here

3.2.4

  • Fixed configuration property names.

3.2.0

  • Implemented debug script generation.
  • Fixed processing of the return clause on the last line.
  • Fixed logging of sensitive errors.

3.1.2

  • Plugin binaries update as a result of included connector change.

3.1.1

  • Fixed processing of stored values from Storage data.

3.1.0

  • Enabled capturing of the data processed in the previous successful task run.

3.0.7

  • Plugin binaries update as a result of included connector change.

3.0.6

  • Plugin binaries update as a result of included connector change.

3.0.5

  • Plugin binaries update as a result of included connector change.

3.0.4

  • Fixed logging null values.

3.0.3

  • Fixed shared nuget package versions.

3.0.2

  • Fixed processing of nullable properties.