Once a predictive model is built, tested, and validated, you can easily deploy it as a REST web service using MaaS_ML, on any physical or virtual compute host that runs an available ProActive Node. This is particularly useful for engineering or business teams that want to take advantage of the model. The life cycle of a MaaS_ML instance (i.e., from starting the generic service instance and deploying a machine learning model to pausing or deleting the instance) can be managed in three different ways in ProActive Machine Learning (PML, the new name for MLOS), as described in this tutorial.

In this tutorial, you will learn how to manage MaaS_ML instances in PML via:

  1. The Studio Portal, and more specifically the model-as-a-service bucket, where generic tasks are provided to perform all the possible actions (i.e. Start_MaaS_ML_Service, Deploy_ML_Model, Call_Prediction_Service, MaaS_ML_Action).
  2. The Service Automation Portal, by executing the different actions associated with MaaS_ML (i.e. Deploy_ML_Model, Update_MaaS_ML, Pause_MaaS_ML, Resume_MaaS_ML, Finish).
  3. The Swagger UI, which is accessible once the PSA service is up and running.

1 Management of a MaaS_ML Instance Using the Studio Portal

Using the Studio Portal of ProActive, you can manage the life cycle of a MaaS_ML instance, from starting a generic service instance and deploying an ML model to pausing or deleting the instance.

  1. Open the ProActive Workflow Studio home page.
  2. Click on the View menu.
  3. Click on Add Bucket Menu to the Palette.
  4. Choose model_as_a_service.
  5. Once the Model As A Service bucket appears, click on it, then drag and drop the IRIS_Deploy_Predict_Flower_Classifier_Model workflow found under MaaS_ML Examples. This workflow contains a pre-built model trainer that loads the IRIS flower dataset, splits it into training and testing sets, and trains the model using a classification technique (in this case, Support Vector Machines). The workflow also includes the 4 MaaS_ML tasks representing its life cycle: Start_MaaS_ML_Service, Deploy_ML_Model, Call_Prediction_Service, MaaS_ML_Action. For more information about the characteristics and features of the MaaS_ML instance tasks and their variables, please check the MaaS_ML (Via Studio Portal) documentation web page.
  6. Click on the Workflow Variables to check the different variables characterizing the overall workflow.
  7. Click on one of the tasks and then click on Task Variables to check the different variables characterizing the chosen task.
  8. Click on the Execute button.
  9. Click on the Scheduling & Orchestration portal.
  10. You can monitor the execution of the workflow and check the output of each task by clicking on the Output tab below. Click on the Call_Prediction_Service task, then on Open in browser in the Task Preview tab to preview the obtained predictions.

2 Management of a MaaS_ML Instance Using the Service Automation Portal

The MaaS_ML instance life cycle can also be managed using the Service Automation Portal by following the steps below:

  1. Open the ProActive Service Automation home page.
  2. In the Service Activation tab under Services Workflows, search for MaaS_ML and click on it.
  3. A window with several variables will appear. In order to run the service, you need to set some of these variables. For example:
    • INSTANCE_NAME: the name given to the instance to be launched.
    • DRIFT_ENABLED: If True, any drift in the data will be detected and the user will be notified.
    For more information about the variables, please visit the MaaS_ML (Via Service Automation Portal) documentation web page.
    Click on the Execute Action button to start the service.
  4. The started MaaS_ML instance will appear in the Activated Services with a Current State as Running.
  5. Under Actions, you will find a drop list of actions that can be applied on the running MaaS_ML instance.
  6. Click on Deploy_ML_Model action.
  7. A window with different variables will appear. In order to deploy a trained model, set the variables accordingly. For example:
    • MODEL_URL: https://activeeon-public.s3.eu-west-2.amazonaws.com/models/model.pkl. This is the URL where the trained model can be found.
    • USER_NAME: user. A valid username should be provided in order to obtain a token that enables the deployment of the model.
    For more information about the variables, please visit the MaaS_ML (Deploy a Specific ML Model) documentation web page. Click on the Execute Action button to start the service.
  8. Once the MaaS_ML instance is successfully deployed, click on maas_ml-gui to view the Audit & Traceability page. On this page, you can check the different variables of the instance and examine its traceability across different dates/times.
  9. Click on the link Click here to visualize the predictions above to visualize the model predictions. This link only appears if you have set the LOGGING_PREDICTION variable to True before the execution (which is the default value).
  10. The Actions drop-down list also offers the Update_MaaS_ML action, which updates the deployed instance according to the updated variables; the Pause_MaaS_ML action, which pauses the service instance; and the Finish action, which finishes and deletes the service instance.

3 Management of a MaaS_ML Instance Using the Swagger UI

Once the MaaS_ML service is launched and running in the Service Automation Portal, click on maas_ml-gui under Endpoint. In the Audit & Traceability page, click on the link provided at the top of the page to access the Swagger UI. Using the Swagger UI, a user is able to deploy a machine learning model as a service. Once the ML model is deployed as a service, it can be called to compute predictions for input datasets.

  1. Open the Swagger home page by clicking on the second link at the top of the page.
  2. In the Swagger UI, you can find several endpoints to manage a MaaS_ML instance.
  3. Start by clicking on the /get_token endpoint to obtain a token for your service.
  4. Click on the Try it out! button. The token ID will appear in the Response Body section. Copy this token ID.
  5. Click on the /deploy endpoint and choose your machine learning model file to be uploaded in model_file. If you set DRIFT_ENABLED when starting the MaaS_ML instance, you should upload in baseline_data a baseline dataset containing part of the data on which the model was trained. This dataset will be used in the data drift detection (DDD) process. For more information about the variables, please visit the MaaS_ML (Data Drift Detection) documentation web page. Paste the copied token ID into api_token and click on Try it out! to deploy the model.
  6. If your model is already deployed using the Service Automation Portal, go to the /predict endpoint.
  7. Click on the Example Value section.
  8. The information will appear in the data section. Paste the token ID into api_token. Here you can choose the data drift detector you want to use: the provided example uses HDDM, but you can also try Page Hinkley and ADWIN. Click on Try it out!
  9. The predictions and the drifts (if they exist) will appear in the Response Body section.
  10. There are several other actions that can be applied using the Swagger UI, such as listing all the deployed models, redeploying a specific model, undeploying a model, etc.
  11. For more information about the Swagger UI, please visit the MaaS_ML (Via Swagger UI) documentation web page.
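The token-then-predict flow above can also be driven programmatically against the same endpoints. The sketch below uses only Python's standard library and is illustrative only: the base URL, the username, and the payload field names are assumptions to be adapted from the Example Value schemas shown in your own instance's Swagger UI (the /deploy call takes a multipart file upload and is easiest to perform from the Swagger page itself):

```python
import json
import urllib.parse
import urllib.request

# Placeholder: replace with your own MaaS_ML instance endpoint,
# shown under Endpoint in the Service Automation portal.
BASE_URL = "http://your-proactive-host:9090/api"


def get_token(base_url, user):
    """GET /get_token: obtain the API token required by the other endpoints."""
    query = urllib.parse.urlencode({"user": user})
    with urllib.request.urlopen(f"{base_url}/get_token?{query}") as resp:
        return resp.read().decode()


def build_predict_payload(api_token, columns, rows):
    """Assemble a /predict request body. The field names used here are
    assumptions; check the Example Value shown in your Swagger UI."""
    return {
        "api_token": api_token,
        "dataframe_json": json.dumps({"columns": columns, "data": rows}),
    }


def predict(base_url, payload):
    """POST the payload to /predict and return the raw response text."""
    req = urllib.request.Request(
        f"{base_url}/predict",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()


# Example usage (requires a running MaaS_ML instance with a deployed model):
#   token = get_token(BASE_URL, "user")   # 'user' must be a valid PML username
#   payload = build_predict_payload(
#       token,
#       ["sepal_length", "sepal_width", "petal_length", "petal_width"],
#       [[5.1, 3.5, 1.4, 0.2]],
#   )
#   print(predict(BASE_URL, payload))     # predictions (and drift info, if any)
```

As in the Swagger UI walkthrough, the token returned by /get_token must accompany every subsequent call.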
When done with this tutorial, you can move on to: