Optimize ML Models and Deploy Human-in-the-Loop Pipelines
3 Projects:
1. Hyperparameter tuning of a BERT-based NLP sentiment classifier
2. A/B testing, traffic shifting and autoscaling of SageMaker endpoints
3. Data labeling and human-in-the-loop pipelines with Amazon Augmented AI (A2I)
My Solutions: Practical Data Science Projects from Coursera, DeepLearning.AI and Amazon Web Services
ML Pipeline using Amazon SageMaker

Project 1:
Hyperparameter tuning with Amazon SageMaker automatic model tuning
When training ML models, hyperparameter tuning is the step that searches for the best-performing training configuration. In this lab you will apply the Random search strategy of SageMaker automatic hyperparameter tuning to train a BERT-based natural language processing (NLP) classifier. The model analyzes customer feedback and classifies each message as positive (1), neutral (0), or negative (-1) sentiment.
Steps
1. Configure dataset
2. Configure and run the hyperparameter tuning job
3. Evaluate the results
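The Random search strategy behind the tuning job can be sketched in plain Python: sample independent configurations from the declared ranges and keep the best score. The hyperparameter names, bounds, and toy objective below are illustrative stand-ins, not the lab's actual SageMaker tuner configuration.

```python
import math
import random

# Illustrative hyperparameter ranges; names and bounds are stand-ins for
# the lab's real BERT training hyperparameters, not its exact settings.
LEARNING_RATE_RANGE = (1e-5, 1e-1)
BATCH_SIZES = [64, 128, 256]

def sample_config(rng):
    """Draw one random configuration, as the Random search strategy does."""
    lo, hi = LEARNING_RATE_RANGE
    return {
        # Log-uniform sampling spreads trials evenly across orders of magnitude.
        "learning_rate": 10 ** rng.uniform(math.log10(lo), math.log10(hi)),
        "train_batch_size": rng.choice(BATCH_SIZES),
    }

def toy_objective(config):
    """Stand-in for 'train the model and report validation accuracy'."""
    lr_distance = abs(math.log10(config["learning_rate"]) - math.log10(1e-3))
    batch_bonus = 0.05 if config["train_batch_size"] == 128 else 0.0
    return 0.9 - 0.1 * lr_distance + batch_bonus

def random_search(objective, max_jobs=20, seed=0):
    """Run max_jobs independent trials and keep the best-scoring config."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(max_jobs):
        config = sample_config(rng)
        score = objective(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best_config, best_score = random_search(toy_objective)
```

Unlike grid search, random search covers continuous ranges without committing to a fixed grid, which is why it scales well when only a few hyperparameters actually matter.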
Project 2:
A/B testing, traffic shifting and autoscaling
Create an endpoint with multiple production variants, splitting traffic between them. After testing and reviewing each variant's performance metrics, you will shift all traffic to one variant and configure it to autoscale.
Steps
1. Configure and create a REST endpoint with multiple variants
2. Test the model
3. Show the metrics for each variant
4. Shift all traffic to one variant
5. Configure one variant to autoscale
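The traffic split between production variants amounts to weighted random routing, which the endpoint performs server-side. A minimal sketch, assuming two hypothetical variant names and a simulated request stream (in the lab, the weights themselves are changed through the SageMaker API rather than by redeploying):

```python
import random
from collections import Counter

def route(weights, n_requests, seed=0):
    """Distribute n_requests across variants in proportion to their weights,
    the way a multi-variant endpoint splits incoming traffic."""
    rng = random.Random(seed)
    names = list(weights)
    picks = rng.choices(names, weights=[weights[v] for v in names], k=n_requests)
    return Counter(picks)

# 50/50 A/B split while testing both variants ...
ab_split = route({"variant-A": 0.5, "variant-B": 0.5}, 10_000)

# ... then shift all traffic to the better-performing variant by
# setting the other variant's weight to zero.
all_b = route({"variant-A": 0.0, "variant-B": 1.0}, 10_000)
```

Because the split is per-request rather than per-client, metrics for each variant accumulate from a statistically comparable slice of live traffic.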
Project 3:
Data labeling and human-in-the-loop pipelines with Amazon Augmented AI (A2I)
Create your own private human workforce and a human task UI, then define a human review workflow to perform data labeling. You will generate the initial label predictions with a custom ML model and create a human loop whenever the probability score falls below a preset threshold. Once the human loop tasks are complete, you will review the results and prepare the data for re-training.
Steps
1. Set up a private workforce and Cognito pool
2. Create the Human Task UI using a Worker Task Template
3. Create a Flow Definition
4. Start and check the status of the human loop
5. Verify completion
6. View the labels and prepare data for re-training
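The decision that sends a prediction to the human loop can be sketched as a simple confidence check. The threshold value, field names, and sample predictions below are illustrative, not the lab's exact A2I configuration.

```python
CONFIDENCE_THRESHOLD = 0.80  # illustrative preset threshold

def needs_human_review(prediction):
    """A prediction whose probability score falls below the threshold is
    escalated to the human loop instead of being accepted automatically."""
    return prediction["probability"] < CONFIDENCE_THRESHOLD

# Hypothetical model outputs using the lab's sentiment labels (1 / 0 / -1).
predictions = [
    {"text": "Great product!",    "label": 1,  "probability": 0.97},
    {"text": "It's okay, I guess", "label": 0,  "probability": 0.54},
    {"text": "Never again",       "label": -1, "probability": 0.88},
]

# Only the low-confidence prediction is sent for human labeling; the
# reviewed results are later merged back to build the re-training dataset.
human_loop_items = [p for p in predictions if needs_human_review(p)]
auto_accepted = [p for p in predictions if not needs_human_review(p)]
```

Tuning the threshold trades labeling cost against label quality: a higher threshold routes more items to reviewers, a lower one trusts the model more.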