Running and saving pipelines

Learn how to run pipelines in real time or on a schedule, and how to preserve your pipeline versions.

You can do the following tasks:

  • Run a pipeline
  • Set a job run name
  • Create a pipeline job
  • View pipeline results
  • Save a version of a pipeline
  • Export pipeline assets for deployment spaces
  • View pipeline dependencies
  • Download pipelines

Running a pipeline

You can run a pipeline in real time to test a flow as you work. When you are satisfied with a pipeline, you can define a job to run it with parameters or on a schedule.

To run a pipeline:

  1. Click Run pipeline on the toolbar.
  2. Choose an option:
    • Trial run runs the pipeline without creating a job. Use this to test a pipeline.
    • Create a job presents you with an interface for configuring and scheduling a job to run the pipeline. You can save and reuse run details, such as pipeline parameters, for a version of your pipeline.
    • View history shows all of your runs over time so that you can compare them.

Before you run a pipeline, make sure that all requirements are met. For example, some nodes might require a deployment space or an API key before they can run.

Setting a job run name

You can optionally specify a job run name when you run a pipeline flow or a pipeline job, and then distinguish the different job runs in the Job details dashboard. Alternatively, you can assign the local parameter DSJobInvocationId to either a Run pipeline job node or a Run DataStage job node (the latter is not available for watsonx).

If both the DSJobInvocationId parameter and the job run name of the node are set, DSJobInvocationId is used. If neither is set, the default value "job run" is used.
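
The following minimal Python sketch restates that precedence. The function name and its arguments are hypothetical; it only illustrates the documented fallback order:

    from typing import Optional

    def resolve_job_run_name(ds_job_invocation_id: Optional[str],
                             node_job_run_name: Optional[str]) -> str:
        """Illustrative only: mirrors the documented precedence.

        The DSJobInvocationId parameter wins over the job run name that
        is set on the node; if neither is set, "job run" is the default.
        """
        if ds_job_invocation_id:
            return ds_job_invocation_id
        if node_job_run_name:
            return node_job_run_name
        return "job run"

    # The parameter takes precedence when both values are set.
    print(resolve_job_run_name("invocation-42", "nightly-scoring"))  # invocation-42
    print(resolve_job_run_name(None, None))                          # job run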

Creating a pipeline job

Use the following configuration options to define a job that runs the pipeline. A hypothetical sketch of these options follows the list.

  1. Name your pipeline job and choose a version.
  2. Enter your IBM API key.
  3. (Optional) Schedule your job by toggling the Schedule button.
    1. Choose the start date and fine-tune your schedule to repeat by minute, hour, day, week, or month.
    2. Add exception days to prevent the job from running on certain days.
    3. Add a time for the schedule to end.
  4. (Optional) Select the parameter sets that your job needs, for example, to assign a space to a deployment node. By default, your job runs with the parameter set that is added to the pipeline. You can override it by selecting another parameter set. To learn how to create a pipeline parameter, see Defining pipeline parameters in Creating a pipeline.
  5. (Optional) Choose whether you want to be notified of the pipeline job status after it runs.
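
As a point of reference, the following hypothetical sketch shows the kind of information that such a job definition captures. All field names are illustrative and do not correspond to an actual API payload; you configure real jobs in the user interface:

    # Hypothetical job definition that mirrors the configuration steps above.
    # Field names are illustrative only; never hard-code a real API key.
    pipeline_job = {
        "name": "score-customers-weekly",         # step 1: job name
        "pipeline_version": 3,                    # step 1: version to run
        "api_key": "<IBM API key>",               # step 2: use a secure source
        "schedule": {                             # step 3: optional schedule
            "start": "2025-01-06T06:00:00Z",
            "repeat": "week",                     # minute | hour | day | week | month
            "exception_days": ["2025-12-25"],     # days the job must not run
            "end": "2025-06-30T06:00:00Z",        # when the schedule stops
        },
        "parameter_sets": ["prod-space-params"],  # step 4: overrides the default set
        "notify": True,                           # step 5: status notifications
    }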

Viewing pipeline results

After you run a pipeline, either as a trial run or from a job, you can view the status and other details of the run, such as parameter results, in the Run tracker.

Open each tab to view details of the pipeline run.

Tab             Description
Node inspector  Select a node, and then click the node inspector to view the details of that node's run, such as logs, inputs, and outputs.
Node output     View the results of each node in one consolidated list. If the run fails, error messages and logs are provided to help you correct issues.
Run details     If they are available, view the parameters that result from one or more pipeline runs. Any DataStage jobs that are associated with the pipeline also appear.

Notes on running a pipeline

  • Errors in the pipeline are flagged with an error badge. Open the node or condition with an error to change or complete the configuration.
  • View the consolidated logs to review operations or identify issues with the pipeline.

Saving a version of a pipeline

You can save a version of a pipeline and revert to it at a later time. For example, save a version if you want to preserve a particular configuration before you make changes. When you share a pipeline, the latest version is used.

To save a version:

  1. Click the Versions icon on the toolbar.
  2. In the Versions pane, click Save version to create a new version with the version number incremented by 1. There is no limit to the number of versions that you can save.

When you run the pipeline, you can choose from available saved versions.

Note: You cannot delete a saved version.

Exporting pipeline assets for deployment spaces

Unlike other assets, pipelines do not support quick deployment or promotion. Instead, you can export a project's or space's assets and import them into a deployment space. When you export, include pipelines in the list of assets that is exported to a ZIP file, and then import that file into a project or space.

Importing a pipeline into a space extends your MLOps capabilities to run jobs for various assets from a space, or to move all jobs from a pre-production to a production space. Note these considerations for working with pipelines in a space:

  • Pipelines in a space are read-only. You cannot edit the pipeline. You must edit the pipeline from the project, then export the updated pipeline and import it into the space.
  • Although you cannot edit the pipeline in a space, you can create new jobs to run the pipeline. You can also use parameters to assign values for jobs so you can have different values for each job you configure.
  • If there is already a pipeline in the space with the same name, the pipeline import will fail.
  • If there is no pipeline in the space with the same name, a pipeline with version 1 is created in the space.
  • Any supporting assets or references required to run a pipeline job must also be part of the import package or the job will fail.
  • If your pipeline contains assets or tools that are not supported in a space, such as an SPSS Modeler job, the pipeline job fails.
  • You can automate the export and import process with a CLI tool such as CPDCTL, as shown in the sketch after this list.
Attention: When you import a pipeline into a deployment space, dependencies cannot be identified and automatically deployed. You must ensure that all dependencies are deployed for the pipeline.
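
For example, an automated export and import with cpdctl might look like the following Python sketch. It assumes that cpdctl is installed and configured with credentials for the source project and the target space; the subcommands and flags shown are assumptions based on the cpdctl asset commands, so verify them against your cpdctl version before you rely on them:

    import subprocess

    PROJECT_ID = "<source-project-id>"  # hypothetical placeholder IDs
    SPACE_ID = "<target-space-id>"

    def run(*args: str) -> None:
        """Run a CLI command and raise if it exits with a nonzero status."""
        print("+", " ".join(args))
        subprocess.run(args, check=True)

    # 1. Export the project's assets, including pipelines, on the server side.
    #    Flags are assumptions; confirm with `cpdctl asset export create --help`.
    run("cpdctl", "asset", "export", "create",
        "--project-id", PROJECT_ID,
        "--assets", '{"all_assets": true}',
        "--name", "pipeline-export")

    # 2. Download the export as a ZIP file, using the export ID from step 1.
    run("cpdctl", "asset", "export", "download",
        "--project-id", PROJECT_ID,
        "--export-id", "<export-id>",
        "--output-file", "pipeline-export.zip")

    # 3. Import the ZIP file into the deployment space.
    run("cpdctl", "asset", "import", "create",
        "--space-id", SPACE_ID,
        "--import-file", "pipeline-export.zip")

Keep in mind the considerations above: the import fails if a pipeline with the same name already exists in the space, and any supporting assets that the pipeline needs must be part of the same package.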

Viewing pipeline dependencies

View and manage your pipeline dependencies by clicking the more icon next to your pipeline assets. Select View relationships from the dropdown menu.

Downloading pipelines

You can download a generic pipeline flow. You can also download a flow together with its DataStage-related dependencies, and then use the DataStage upload functionality to upload the flow and re-create the pipeline and its dependencies.

  1. Select the Enable DataStage functions in Expression Builder and support Pipelines download option in your pipeline settings to enable the download button.

  2. Click Download flow and dependencies in your pipeline canvas toolbar to download the pipeline.

  3. Upload the ZIP file when you create a DataStage flow. You must do this step in your watsonx.ai Studio project.

Parent topic: IBM Orchestration Pipelines
