Deploying Natural Language Processing models

You can deploy a Natural Language Processing model in watsonx.ai Runtime by using Python functions or Python scripts. Both online and batch deployments are supported.

  • You can deploy a Python function for online deployments and for batch deployments with inline data, and a Python script for batch deployments with data references.
  • The prediction output that is returned from <model>.run() is an instance of a model-specific prediction class (for example, watson_nlp.data_model.syntax.SyntaxPrediction). Such objects cannot be serialized to JSON, so you must convert the prediction output to a Python dictionary or to JSON by using the <prediction output>.to_dict() (recommended) or <prediction output>.to_json() methods. If you don't convert the output, scoring API requests return an error. See Example of handling prediction output of an NLP model.
  • You can access the location of pre-trained Watson NLP models in the Python function code by using the LOAD_PATH environment variable.
  • The prediction input payload and the prediction response that is returned from score() must meet the requirements that are listed in the online scoring and jobs API documentation.
  • Scoring requests for NLP models might fail with an Out of Memory error that is reported by the underlying JVM runtime. If this error is reported, patch the deployment to use a hardware specification with more available memory.
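Taken together, the points above suggest the following shape for a deployable Python function. This is a minimal sketch, not a definitive implementation: the StubPrediction class and the commented-out watson_nlp calls are stand-ins (watson_nlp is not assumed to be installed here), and the payload layout follows the watsonx.ai scoring API's input_data/predictions convention.

```python
import json

class StubPrediction:
    """Stand-in for a watson_nlp prediction object (hypothetical).

    Real prediction classes are not JSON-serializable, which is why
    to_dict() must be called before returning a scoring response.
    """
    def to_dict(self):
        return {"text": "Welcome to IBM!", "tokens": ["Welcome", "to", "IBM", "!"]}

def deployable_function():
    # In a real deployment, load the model here, for example:
    #   import os, watson_nlp
    #   model = watson_nlp.load(os.environ["LOAD_PATH"] + "/syntax_izumo_en_stock")

    def score(payload):
        values = []
        for row in payload["input_data"][0]["values"]:
            prediction = StubPrediction()          # real code: model.run(row[0])
            values.append([prediction.to_dict()])  # convert before returning
        return {"predictions": [{"fields": ["prediction"], "values": values}]}

    return score

scorer = deployable_function()
response = scorer({"input_data": [{"fields": ["text"], "values": [["Welcome to IBM!"]]}]})
json.dumps(response)  # serialization succeeds because predictions were converted
```

Because score() returns only dictionaries and lists, the response serializes cleanly, which is exactly what the scoring API requires.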

Prerequisites

You must set up your task credentials by generating an API key. For more information, see Managing task credentials.

Usage examples

Running syntax analysis on a text snippet

import watson_nlp

# Load the syntax model for English
syntax_model = watson_nlp.load('syntax_izumo_en_stock')

# Run the syntax model and print the result
syntax_prediction = syntax_model.run('Welcome to IBM!')
print(syntax_prediction)

Extracting entities from a text snippet

import watson_nlp
entities_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled')
entities = entities_workflow.run('IBM\'s CEO Arvind Krishna is based in the US', language_code="en")
print(entities.get_mention_pairs())

Example of handling prediction output of an NLP model

    # Convert each prediction to a dict so the scoring response is JSON-serializable
    for input_data in input_data_row:
        targets_sentiments = targets_sentiment_model.run(input_data)
        scoring_prediction_row.append(targets_sentiments.to_dict())
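To see why the conversion is required: json.dumps fails on a raw prediction object but succeeds on its dictionary form. The sketch below uses a hypothetical stand-in class, since watson_nlp itself is not assumed to be available.

```python
import json

class Prediction:
    """Stand-in for a watson_nlp prediction object (hypothetical)."""
    def to_dict(self):
        return {"label": "positive", "score": 0.98}

prediction = Prediction()
try:
    json.dumps(prediction)  # raises TypeError: Prediction is not JSON serializable
    serialized = None
except TypeError:
    serialized = json.dumps(prediction.to_dict())  # works after to_dict()
```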

Supported software specifications

The following software specifications support the deployment of NLP models:

  • runtime-23.1-py3.10
  • Custom software specifications based on runtime-23.1-py3.10
  • runtime-24.1-py3.11
  • Custom software specifications based on runtime-24.1-py3.11

For information on how to customize software specifications, see Customizing watsonx.ai Runtime deployment runtimes.

NLP model deployment examples

For examples, refer to this Jupyter Notebook:

Parent topic: Managing predictive deployments
