AI Governance is a set of tools and capabilities for managing your AI assets in accordance with your organization's regulations and requirements. Use AI Governance to track machine learning models from request to production and to evaluate models against thresholds for fairness and accuracy. Trustworthy AI requires strong governance.
The core of the AI Governance solution is AI Factsheets, which you use to track AI models from request to production. In previous releases, AI Factsheets was part of Watson Knowledge Catalog. Now you can install AI Factsheets as an independent service with either Watson Studio or Watson Knowledge Catalog, choosing the installation option that matches how you plan to implement governance.
You can set up AI governance iteratively, scaling as needed. Start with a simple implementation, then customize your AI governance to better track and govern your models as your needs evolve.
Recommended services for AI governance
You can build your AI governance strategy with AI Factsheets and either Watson Knowledge Catalog or Watson Studio, but for the full AI Governance solution, use these services:
- AI Factsheets
- Watson Knowledge Catalog
- Watson Studio
- Watson Machine Learning
- Watson OpenScale
Simplest implementation of AI governance
For the most straightforward implementation of AI governance, use a catalog in Watson Knowledge Catalog to track and inventory models. A model use case in a catalog consists of a set of AI factsheets that contain lineage, history, and other relevant information about a model's lifecycle. A Watson Knowledge Catalog administrator must create a catalog and add data scientists, data engineers, and other users as collaborators.
Catalog collaborators can request and track models:
- Business users add model use cases to the model inventory view of the catalog to request machine learning models.
- Data scientists associate their trained models with model use cases to create AI factsheets.
AI factsheets accumulate information about the model in the following ways:
- All actions associated with the tracked model are automatically saved, including deployments and evaluations.
- All changes to input data assets are automatically saved.
- Data scientists can add tags, business terms, supporting documentation, and other information.
- Data scientists can associate challenger models with the model use case to compare model performance.
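As an illustration only, the accumulation described above can be thought of as an append-only event log attached to each tracked model in a model use case. The class and field names in this sketch are hypothetical; they are not the AI Factsheets API, just a minimal model of the idea.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Factsheet:
    """Hypothetical sketch: a factsheet as an append-only record of lifecycle events."""
    model_name: str
    events: list = field(default_factory=list)
    tags: set = field(default_factory=set)

    def record(self, action: str, detail: str) -> None:
        # Each tracked action (deployment, evaluation, input-data change) is
        # appended automatically with a timestamp, preserving model history.
        self.events.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "detail": detail,
        })

@dataclass
class ModelUseCase:
    """A model use case groups the factsheets for a model and any challengers."""
    name: str
    factsheets: list = field(default_factory=list)

    def track(self, model_name: str) -> Factsheet:
        fs = Factsheet(model_name)
        self.factsheets.append(fs)
        return fs

# Usage: a business user requests a model; a data scientist tracks two candidates.
use_case = ModelUseCase("loan-approval")
champion = use_case.track("credit-risk-model")
champion.record("deployment", "deployed to pre-production space")
champion.record("evaluation", "fairness threshold met")
champion.tags.add("pii-reviewed")                  # data scientist adds a tag
challenger = use_case.track("credit-risk-alt")    # challenger for comparison
print(len(use_case.factsheets))
```

The append-only design matters for governance: validators reviewing a factsheet see the full, ordered history of what happened to the model, not just its current state.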
Validators and other stakeholders review AI factsheets to ensure compliance and certify model progress from development to production. They can also generate reports from the factsheets to print, share, or archive details.
Customization options for AI governance
You can add these custom options to your AI governance implementation at any time:
- MLOps engineers can extend model tracking to include external models that are created with third-party machine learning tools.
- MLOps engineers can add custom properties to factsheets to track more information.
- Compliance analysts can customize the default report templates to generate tailored reports for the organization.
- Results of IBM Watson OpenScale evaluations for fairness and other metrics can be recorded as part of model tracking.
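To illustrate the custom-properties idea, a factsheet can be pictured as a set of default facts that MLOps engineers extend with organization-specific key-value fields, including for external third-party models. The function and field names here are hypothetical, not the AI Factsheets API.

```python
# Hypothetical sketch (not the AI Factsheets API): custom properties as extra
# key-value facts merged into a factsheet record.
DEFAULT_FACTS = {"name": None, "stage": "development", "origin": "internal"}

def build_factsheet(name, custom_properties=None, external=False):
    """Return a factsheet record; custom properties carry extra tracked fields."""
    facts = dict(DEFAULT_FACTS, name=name)
    if external:
        # An external model was built with third-party machine learning tools.
        facts["origin"] = "external"
    facts.update(custom_properties or {})
    return facts

# Usage: track an external model with two organization-specific properties.
sheet = build_factsheet(
    "fraud-detector",
    custom_properties={"cost_center": "1234", "review_cycle": "quarterly"},
    external=True,
)
print(sheet["origin"], sheet["cost_center"])
```

Because custom properties are just additional facts on the sheet, compliance analysts can surface them in tailored reports alongside the default lifecycle information.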