Migrating DataStage jobs
You can migrate DataStage jobs by creating and importing ISX files that contain the job information. Complete other post-migration tasks where applicable.
Procedure
Create and import the ISX file
Create and export an ISX file by using one of the methods that are listed in the following table:
| Option | Instructions |
| --- | --- |
| ISTOOL | Use ISTOOL to create an ISX file and export the file. For instructions, see Export command for InfoSphere DataStage and QualityStage assets and How to use ISTOOL for EXPORT IMPORT Information Server Components. |
| MettleCI | Use MettleCI, which is a third-party service, to convert a server job design into an equivalent parallel job design, then create an ISX file and export the file to your system. For more information, see the MettleCI docs. |
| InfoSphere Information Server Manager GUI client | Use the Information Server Manager GUI client to export the ISX file. For detailed instructions, see Exporting assets. |
- Open an existing project or create a new one.
- From the Assets tab of the project, click the import icon.
- Click the Local file tab, then upload the ISX file from your local computer. Then, click Create.

Note: The ISX file must exist on your desktop or network drive. Do not drag the file as an attachment from another application.
The asset import report contains status information and error messages that you can use to troubleshoot your ISX import. For information on viewing and using the report to troubleshoot, see Asset import report (DataStage).
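If you script your migrations, you can also import the ISX file from the command line with cpdctl. The following is a minimal sketch, assuming cpdctl is already configured; the project name and file name are placeholders, and the exact options depend on your cpdctl version.

```shell
# Hypothetical example: import (migrate) an exported ISX file into an
# existing project. "myproject" and "export.isx" are placeholders; check
# `cpdctl dsjob migrate --help` for the options in your cpdctl version.
cpdctl dsjob migrate --project myproject --file-name export.isx
```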
Migrate connections
If your migrated jobs contain connections, see Migrating connections in DataStage for information.
Migrate stages
| Stages | Considerations |
| --- | --- |
| Stored procedure | Stored procedures are migrated to the corresponding platform connector. All stored procedures on Db2® type connectors are migrated to the standard Db2 connector, including stored procedures for connectors like Db2 for i and Db2 for z/OS®. Manually replace the Db2 connector with the correct connector type and copy over the stored procedure call. If input and output parameters cannot be detected in a stored procedure, it is left as-is and must be updated after migration to match the new syntax. For more information, see Using stored procedures in DataStage. |
Review the parameter sets and PROJDEF values
Review your parameter sets and verify that their default values are correct after migration.
PROJDEF parameter sets are created and updated by migration. If you migrate a job with a PROJDEF parameter set, review the PROJDEF parameter set and specify default values for it. Then, within flows and job runs, any parameter value that is $PROJDEF uses the value from the PROJDEF parameter set.
If PROJDEF parameter values have been defined in the DSParams file, use the `cpdctl dsjob create-dsparams` command to transfer those values into your project's runtime environment. For more information, see DSParams.
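The transfer can be scripted. The following is a minimal sketch, assuming cpdctl is already configured; the project name and the DSParams file path are placeholders.

```shell
# Hypothetical example: create PROJDEF parameter values in a project from
# an exported DSParams file. "myproject" and the file path are placeholders;
# verify the exact options with `cpdctl dsjob create-dsparams --help`.
cpdctl dsjob create-dsparams --project myproject --file-name /path/to/DSParams
```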
Update scripts that use the dsjob command line interface
- Download cpdctl: https://github.com/IBM/cpdctl/releases/
- Create a source shell script (source.sh) to configure cpdctl. Create a text file key.txt for your encryption key. See the following example:

```sh
#!/bin/bash
export CPDCTL_ENCRYPTION_KEY_PATH=~/key.txt
export DSJOB_URL=https://example.com
export DSJOB_ZEN_URL=https://example.com
export CPDCTL_ENABLE_DSJOB=true
export CPDCTL_ENABLE_DATASTAGE=true
export DSJOB_USER=admin
export DSJOB_PWD=<Password>
cpdctl config user set dscpserver-user --username $DSJOB_USER --password $DSJOB_PWD
cpdctl config profile set dscpserver-profile --url $DSJOB_URL
cpdctl config context set dscpserver-context --user dscpserver-user --profile dscpserver-profile
cpdctl config context use dscpserver-context
cpdctl dsjob list-projects
```
- Change any references to `dsjob` to `cpdctl dsjob`. You might need to adjust the command-line options to fit the DataStage command-line style. See DataStage command-line tools.
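As a starting point for bulk updates, a naive textual rewrite can handle the command-name change. The file names below are illustrative, and option flags still need manual review because the cpdctl dsjob options differ from the engine-tier dsjob options.

```shell
# Sketch: rewrite legacy dsjob calls to cpdctl dsjob in a script.
# legacy_script.sh is a hypothetical input file; only lines that start
# with "dsjob " are rewritten, and flags are left for manual adjustment.
printf 'dsjob -run -jobstatus myproject myjob\n' > legacy_script.sh
sed -e 's/^dsjob /cpdctl dsjob /' legacy_script.sh > migrated_script.sh
cat migrated_script.sh
```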
Migrate sequence jobs
You can import an ISX file to migrate a sequence job to a pipeline flow. Rewrite expressions in CEL and manually reselect values for some pipeline nodes. See the following topics for more considerations: Run flows in sequence with Orchestration Pipelines and Migrating and constructing pipeline flows for DataStage. See Migrating BASIC routines in DataStage for information on rewriting BASIC routines as scripts.
Rewrite the routine code for before-job and after-job subroutines
When you migrate before-job and after-job subroutines, the routine code is stored in a .sh script under /ds-storage/projects/<projectName>/scripts/DSU.<RoutineName>.sh. Rewrite the routine code in the same way as a BASIC routine, following the steps in Migrating BASIC routines in DataStage to retrieve the output arguments, but include an exit statement for the before/after-job subroutine. See the following example:

```sh
# TODO: Update the following json string and print it as the last line of the standard output.
ErrorCode=0
echo "{\"ErrorCode\":\"$ErrorCode\"}"
exit $ErrorCode
```