Batch deployment input details for AutoAI models
Follow these rules when you are specifying input details for batch deployments of AutoAI models.
Data type summary table:
| Data | Description |
| --- | --- |
| Type | inline, data references |
| File formats | CSV |
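As a minimal sketch of the inline option, the following example passes CSV-style fields and values directly when creating a batch scoring job with the ibm-watson-machine-learning Python client. The credentials, space ID, deployment ID, column names, and row values are placeholders, not values from this documentation.

```python
from ibm_watson_machine_learning import APIClient

# Placeholder credentials and space ID; replace with your own.
client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<your-api-key>"})
client.set.default_space("<space-id>")

# Inline input: field names and row values must match the training schema.
job_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [{
        "fields": ["AGE", "INCOME", "GENDER"],            # example columns (assumed)
        "values": [[35, 42000, "M"], [51, 58000, "F"]]    # example rows (assumed)
    }]
}

# Create the batch scoring job against an existing batch deployment.
job = client.deployments.create_job("<batch-deployment-id>", meta_props=job_payload)
```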
Data Sources
Input/output data references:
- Local/managed assets from the space
- Connected (remote) assets: Cloud Object Storage
Notes:
- For connections of type Cloud Object Storage, you must configure Access key and Secret key, also known as HMAC credentials (a connection sketch follows this list).
- Your training data source can differ from your deployment data source, but the schema of the data must match or the deployment will fail. For example, you can train an experiment by using data from a Snowflake database and deploy by using input data from a Db2 database if the schema is an exact match.
- The environment variables parameter of deployment jobs does not apply to this deployment type.
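As a sketch of the HMAC requirement, a Cloud Object Storage connection created through the Python client can carry the keys in its connection properties. The data source type name and the property keys shown (bucket, access_key, secret_key, url) are illustrative assumptions; check the data source definition for your environment for the exact field names.

```python
# Minimal sketch, assuming an already-initialized `client` (see the earlier example).
# Data source type name and property keys are assumptions for illustration.
cos_datasource_type = client.connections.get_datasource_type_uid_by_name(
    "bluemixcloudobjectstorage"
)

connection_props = {
    client.connections.ConfigurationMetaNames.NAME: "COS batch input",
    client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: cos_datasource_type,
    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "bucket": "<bucket-name>",
        "access_key": "<hmac-access-key>",   # HMAC credentials
        "secret_key": "<hmac-secret-key>",
        "url": "https://s3.us-south.cloud-object-storage.appdomain.cloud"
    }
}

cos_connection = client.connections.create(meta_props=connection_props)
connection_id = client.connections.get_uid(cos_connection)
```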
If you are specifying input/output data references programmatically:
- Data source reference `type` depends on the asset type. Refer to the Data source reference types section in Adding data assets to a deployment space.
- For AutoAI assets, if the input or output data reference is of type `connection_asset` and the remote data source is a database, then `location.table_name` and `location.schema_name` are required parameters. For example:
"input_data_references": [{
"type": "connection_asset",
"connection": {
"id": <connection_guid>
},
"location": {
"table_name": <table name>,
"schema_name": <schema name>
<other wdp-properties supported by runtimes>
}
}]
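Putting the pieces together, a batch job that reads from and writes to database connection assets might be created as in the following sketch with the Python client. The connection GUIDs, schema names, and table names are placeholders; the output data reference mirrors the shape of the input reference.

```python
# Minimal sketch, assuming an initialized `client` and existing database
# connection assets; IDs, schema names, and table names are placeholders.
job_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [{
        "type": "connection_asset",
        "connection": {"id": "<input-connection-guid>"},
        "location": {
            "table_name": "<input-table>",
            "schema_name": "<input-schema>"
        }
    }],
    client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
        "type": "connection_asset",
        "connection": {"id": "<output-connection-guid>"},
        "location": {
            "table_name": "<output-table>",
            "schema_name": "<output-schema>"
        }
    }
}

job = client.deployments.create_job("<batch-deployment-id>", meta_props=job_payload)
```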
Parent topic: Batch deployment input details by framework