Methods for tuning foundation models
Last updated: Jun 19, 2025
Learn more about different tuning methods.
Foundation models can be tuned in the following ways:
- Prompt tuning: Adjusts the content of the prompt that is passed to the model so that the model generates output matching a pattern that you specify. The underlying foundation model and its parameter weights are not changed; only the prompt input is altered.

  Although the result of prompt tuning is a new tuned model asset, the prompt-tuned model adds only a layer of function that runs before the input is processed by the underlying foundation model. Because the underlying foundation model itself is unchanged, one model can serve many business needs without being retrained each time, which reduces computational needs and inference costs. See Prompt tuning, and the sketch after this list.

  Note: Prompt tuning is deprecated and will be removed in a future release.
- Full fine tuning: Starting from the knowledge that the base model gained in prior training, full fine tuning tailors the model by training it on a smaller, task-specific data set. This method changes the parameter weights of the model itself to customize it for a task. A contrasting sketch follows this list.

  Restriction: You currently cannot use full fine tuning with foundation models in watsonx.ai.
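
To make the mechanism concrete, the following minimal PyTorch sketch illustrates the idea behind prompt tuning. It is illustrative only and is not the watsonx.ai implementation: `TinyBaseModel` is a hypothetical stand-in for a pretrained foundation model, and `PromptTunedModel` shows the key point, a small set of trainable "soft prompt" vectors prepended to the input embeddings while every base model weight stays frozen.

```python
import torch
import torch.nn as nn

class TinyBaseModel(nn.Module):
    """Hypothetical stand-in for a pretrained foundation model."""
    def __init__(self, d_model=64, vocab_size=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.backbone = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, input_embeds):
        # Accepts embeddings directly so a soft prompt can be prepended.
        return self.head(self.backbone(input_embeds))

class PromptTunedModel(nn.Module):
    """Wraps a frozen base model with a trainable soft prompt."""
    def __init__(self, base, num_prompt_tokens=8, d_model=64):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the foundation model is never changed
        # The only trainable parameters: the soft prompt vectors.
        self.soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, d_model) * 0.02)

    def forward(self, input_ids):
        tok = self.base.embed(input_ids)                        # (B, T, D)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        # The prompt layer runs before the input reaches the base model.
        return self.base(torch.cat([prompt, tok], dim=1))

base = TinyBaseModel()
model = PromptTunedModel(base)
opt = torch.optim.AdamW([model.soft_prompt], lr=1e-3)  # only the prompt is optimized
logits = model(torch.randint(0, 100, (2, 10)))
print(logits.shape)  # torch.Size([2, 18, 100]): 8 prompt tokens + 10 input tokens
```

Because only the soft prompt is trained, the same frozen base model can be reused with different prompt layers for different tasks.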
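For contrast, here is a sketch of full fine tuning under the same assumptions, reusing the hypothetical `TinyBaseModel` above: every pretrained weight is unfrozen and updated on a task-specific batch, so the model itself changes.

```python
import torch

model = TinyBaseModel()                               # weights come from prior training
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)  # all parameters are trainable
loss_fn = torch.nn.CrossEntropyLoss()

input_ids = torch.randint(0, 100, (2, 10))            # toy task-specific batch
labels = torch.randint(0, 100, (2, 10))

logits = model(model.embed(input_ids))                # (B, T, vocab)
loss = loss_fn(logits.reshape(-1, 100), labels.reshape(-1))
loss.backward()
opt.step()  # updates every weight of the model, unlike prompt tuning
```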
Parent topic: Tuning foundation models