Download Google Professional Machine Learning Engineer.Professional-Machine-Learning-Engineer.VCEplus.2024-09-17.106q.vcex

Vendor: Google
Exam Code: Professional-Machine-Learning-Engineer
Exam Name: Google Professional Machine Learning Engineer
Date: Sep 17, 2024
File Size: 946 KB
Downloads: 1

How to open VCEX files?

Files with the VCEX extension can be opened with ProfExam Simulator.

Demo Questions

Question 1
Your data science team needs to rapidly experiment with various features, model architectures, and hyperparameters. They need to track the accuracy metrics for various experiments and use an API to query the metrics over time. What should they use to track and report their experiments while minimizing manual effort?
  1. Use Kubeflow Pipelines to execute the experiments. Export the metrics file, and query the results using the Kubeflow Pipelines API.
  2. Use AI Platform Training to execute the experiments. Write the accuracy metrics to BigQuery, and query the results using the BigQuery API.
  3. Use AI Platform Training to execute the experiments. Write the accuracy metrics to Cloud Monitoring, and query the results using the Monitoring API.
  4. Use AI Platform Notebooks to execute the experiments. Collect the results in a shared Google Sheets file, and query the results using the Google Sheets API.
Correct answer: A
Explanation:
https://codelabs.developers.google.com/codelabs/cloud-kubeflow-pipelines-gis Kubeflow Pipelines (KFP) helps solve these issues by providing a way to deploy robust, repeatable machine learning pipelines along with monitoring, auditing, version tracking, and reproducibility. Cloud AI Pipelines makes it easy to set up a KFP installation.
https://www.kubeflow.org/docs/components/pipelines/introduction/#what-is-kubeflow-pipelines
'Kubeflow Pipelines supports the export of scalar metrics. You can write a list of metrics to a local file to describe the performance of the model. The pipeline agent uploads the local file as your run-time metrics. You can view the uploaded metrics as a visualization in the Runs page for a particular experiment in the Kubeflow Pipelines UI.' https://www.kubeflow.org/docs/components/pipelines/sdk/pipelines-metrics/
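As a concrete illustration of the scalar-metrics export described above, here is a minimal sketch of a KFP v2 Python component; the component name and the accuracy value are illustrative placeholders, not part of the original question:

    from kfp import dsl
    from kfp.dsl import Metrics, Output

    @dsl.component
    def evaluate_model(metrics: Output[Metrics]):
        # Placeholder value; in practice this comes from evaluating the trained model.
        accuracy = 0.92
        # log_metric records a scalar that the Kubeflow Pipelines UI and API
        # surface per run, so experiments can be compared and queried over time.
        metrics.log_metric("accuracy", accuracy)

Each pipeline run then carries its own accuracy value, which is what lets the team query metrics across experiments through the API with minimal manual effort.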
Question 2
You are developing an ML model intended to classify whether X-ray images indicate bone fracture risk. You have trained a ResNet architecture on Vertex AI using a TPU as an accelerator; however, you are unsatisfied with the training time and memory usage. You want to quickly iterate your training code, but make minimal changes to the code. You also want to minimize impact on the model's accuracy. What should you do?
  1. Configure your model to use bfloat16 instead of float32
  2. Reduce the global batch size from 1024 to 256
  3. Reduce the number of layers in the model architecture
  4. Reduce the dimensions of the images used in the model
Correct answer: B
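For context, option A (bfloat16) is the kind of minimal code change this question contrasts: in TensorFlow/Keras it is typically a one-line policy switch. A sketch, with a purely illustrative model definition:

    import tensorflow as tf

    # Compute in bfloat16 (well supported on TPUs) while keeping variables in float32.
    tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2),
    ])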
Question 3
Your task is to classify whether a company logo is present in an image. You found out that 96% of the data does not include a logo, so you are dealing with a data imbalance problem. Which metric should you use to evaluate the model?
  1. F1 Score
  2. RMSE
  3. F Score with higher precision weighting than recall
  4. F Score with higher recall weighting than precision
Correct answer: D
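Options C and D correspond to the F-beta score, where beta controls the precision/recall trade-off; a minimal scikit-learn sketch with made-up labels:

    from sklearn.metrics import fbeta_score

    y_true = [0, 0, 0, 1, 1, 1]
    y_pred = [0, 1, 0, 1, 0, 1]

    # beta > 1 weights recall more heavily than precision (option D);
    # beta < 1 weights precision more heavily (option C).
    print(fbeta_score(y_true, y_pred, beta=2.0))   # recall-weighted F2
    print(fbeta_score(y_true, y_pred, beta=0.5))   # precision-weighted F0.5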
Question 4
You need to train a regression model based on a dataset containing 50,000 records that is stored in BigQuery. The data includes a total of 20 categorical and numerical features with a target variable that can include negative values. You need to minimize effort and training time while maximizing model performance. What approach should you take to train this regression model?
  1. Create a custom TensorFlow DNN model.
  2. Use BigQuery ML (BQML) XGBoost regression to train the model.
  3. Use AutoML Tables to train the model without early stopping.
  4. Use AutoML Tables to train the model with RMSLE as the optimization objective.
Correct answer: B
Explanation:
https://cloud.google.com/bigquery-ml/docs/introduction
BigQuery ML lets you train an XGBoost (boosted tree) regressor directly where the data already lives, minimizing effort and training time. Option D is unsuitable because RMSLE takes a logarithm of predictions and targets, which is undefined when the target variable can be negative.
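A sketch of what option B could look like when issued from Python through the google-cloud-bigquery client; the dataset, table, and label column names are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client()
    # BOOSTED_TREE_REGRESSOR is BigQuery ML's XGBoost-based regression model type.
    # Dataset, table, and column names below are placeholders.
    sql = """
    CREATE OR REPLACE MODEL `my_dataset.regression_model`
    OPTIONS (model_type = 'BOOSTED_TREE_REGRESSOR',
             input_label_cols = ['target']) AS
    SELECT * FROM `my_dataset.training_data`
    """
    client.query(sql).result()  # blocks until training finishes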
Question 5
Your data science team has requested a system that supports scheduled model retraining, Docker containers, and a service that supports autoscaling and monitoring for online prediction requests. Which platform components should you choose for this system?
  1. Vertex AI Pipelines and App Engine
  2. Vertex AI Pipelines and AI Platform Prediction
  3. Cloud Composer, BigQuery ML, and AI Platform Prediction
  4. Cloud Composer, AI Platform Training with custom containers, and App Engine
Correct answer: B
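As a sketch of the scheduled-retraining half of option B, recent versions of the google-cloud-aiplatform SDK let you attach a cron schedule to a compiled pipeline; the project, region, template path, and cron string below are all placeholders:

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")  # placeholders

    job = aiplatform.PipelineJob(
        display_name="retraining-pipeline",
        template_path="gs://my-bucket/pipeline.json",  # compiled pipeline, hypothetical path
    )
    # Retrain every Monday at 09:00; online prediction (autoscaling, monitoring)
    # is handled separately by the prediction service.
    job.create_schedule(cron="0 9 * * 1", display_name="weekly-retrain")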
Question 6
While monitoring your model training's GPU utilization, you discover that you have a naive synchronous implementation. The training data is split into multiple files. You want to reduce the execution time of your input pipeline. What should you do?
  1. Increase the CPU load
  2. Add caching to the pipeline
  3. Increase the network bandwidth
  4. Add parallel interleave to the pipeline
Correct answer: A
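Option D names tf.data's interleave transformation, which overlaps reads across the multiple input files instead of consuming them one at a time; a minimal sketch with a hypothetical file pattern:

    import tensorflow as tf

    files = tf.data.Dataset.list_files("gs://my-bucket/train-*.tfrecord")  # hypothetical
    dataset = files.interleave(
        tf.data.TFRecordDataset,             # reader applied to each file
        cycle_length=4,                      # number of files read concurrently
        num_parallel_calls=tf.data.AUTOTUNE, # parallelize the reads
    )
    dataset = dataset.prefetch(tf.data.AUTOTUNE)  # overlap input with training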
Question 7
Your data science team is training a PyTorch model for image classification based on a pre-trained ResNet model. You need to perform hyperparameter tuning to optimize for several parameters. What should you do?
  1. Convert the model to a Keras model, and run a Keras Tuner job.
  2. Run a hyperparameter tuning job on AI Platform using custom containers.
  3. Create a Kubeflow Pipelines instance, and run a hyperparameter tuning job on Katib.
  4. Convert the model to a TensorFlow model, and run a hyperparameter tuning job on AI Platform.
Correct answer: C
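Option C's tuning can be driven from Python via the kubeflow-katib SDK; a rough sketch under the assumption that a Katib-enabled cluster is available, with the objective function, search range, and trial count all illustrative:

    import kubeflow.katib as katib

    def objective(parameters):
        # Stand-in for a real PyTorch training loop; Katib collects the metric
        # printed to stdout in "name=value" form from each trial.
        accuracy = 1.0 - parameters["lr"]
        print(f"accuracy={accuracy}")

    katib.KatibClient().tune(
        name="resnet-hp-tuning",
        objective=objective,
        parameters={"lr": katib.search.double(min=0.001, max=0.1)},
        objective_metric_name="accuracy",
        max_trial_count=12,
    )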
Question 8
You have a large corpus of written support cases that can be classified into 3 separate categories: Technical Support, Billing Support, or Other Issues. You need to quickly build, test, and deploy a service that will automatically classify future written requests into one of the categories. How should you configure the pipeline?
  1. Use the Cloud Natural Language API to obtain metadata to classify the incoming cases.
  2. Use AutoML Natural Language to build and test a classifier. Deploy the model as a REST API.
  3. Use BigQuery ML to build and test a logistic regression model to classify incoming requests. Use BigQuery ML to perform inference.
  4. Create a TensorFlow model using Google's BERT pre-trained model. Build and test a classifier, and deploy the model using Vertex AI.
Correct answer: B
Explanation:
AutoML Natural Language is a service that allows you to quickly build, test, and deploy natural language processing (NLP) models without needing expertise in NLP or machine learning. You can use it to train a classifier on your corpus of written support cases, and then use the AutoML API to classify new requests. Once the model is trained, it can be deployed as a REST API, which allows the classifier to be integrated into your pipeline and easily consumed by other systems.
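Once such a model is deployed, classification requests can go through the AutoML prediction API; a minimal sketch with the google-cloud-automl Python client, where the project, region, and model ID are placeholders:

    from google.cloud import automl

    client = automl.PredictionServiceClient()
    # Placeholders for project ID, region, and AutoML model ID.
    model_name = automl.AutoMlClient.model_path("my-project", "us-central1", "TCN12345")

    payload = automl.ExamplePayload(
        text_snippet=automl.TextSnippet(
            content="I was charged twice for last month's invoice.",
            mime_type="text/plain",
        )
    )
    response = client.predict(name=model_name, payload=payload)
    for result in response.payload:
        # Each result is one category with its predicted confidence.
        print(result.display_name, result.classification.score)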
Question 9
You need to quickly build and train a model to predict the sentiment of customer reviews with custom categories without writing code. You do not have enough data to train a model from scratch. The resulting model should have high predictive performance. Which service should you use?
  1. AutoML Natural Language
  2. Cloud Natural Language API
  3. AI Hub pre-made Jupyter Notebooks
  4. AI Platform Training built-in algorithms
Correct answer: A
Question 10
You need to build an ML model for a social media application to predict whether a user's submitted profile photo meets the requirements. The application will inform the user if the picture meets the requirements. How should you build a model to ensure that the application does not falsely accept a non-compliant picture?
  1. Use AutoML to optimize the model's recall in order to minimize false negatives.
  2. Use AutoML to optimize the model's F1 score in order to balance the accuracy of false positives and false negatives.
  3. Use Vertex AI Workbench user-managed notebooks to build a custom model that has three times as many examples of pictures that meet the profile photo requirements.
  4. Use Vertex AI Workbench user-managed notebooks to build a custom model that has three times as many examples of pictures that do not meet the profile photo requirements.
Correct answer: C