Download Professional Cloud Database Engineer.Professional-Cloud-Database-Engineer.VCEplus.2024-01-14.30q.vcex

Vendor: Google
Exam Code: Professional-Cloud-Database-Engineer
Exam Name: Professional Cloud Database Engineer
Date: Jan 14, 2024
File Size: 25 KB
Downloads: 1

How to open VCEX files?

Files with VCEX extension can be opened by ProfExam Simulator.

Demo Questions

Question 1
Your team recently released a new version of a highly consumed application to accommodate additional user traffic. Shortly after the release, you received an alert from your production monitoring team that there is consistently high replication lag between your primary instance and the read replicas of your Cloud SQL for MySQL instances. You need to resolve the replication lag.
What should you do?
  1. Identify and optimize slow running queries, or set parallel replication flags.
  2. Stop all running queries, and re-create the replicas.
  3. Edit the primary instance to upgrade to a larger disk, and increase vCPU count.
  4. Edit the primary instance to add additional memory.
Correct answer: A
Question 2
Your organization operates in a highly regulated industry. Separation of concerns (SoC) and security principle of least privilege (PoLP) are critical. The operations team consists of:
  • Person A is a database administrator.
  • Person B is an analyst who generates metric reports.
  • Application C is responsible for automatic backups.
You need to assign roles to team members for Cloud Spanner. Which roles should you assign?
  1. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseReader for Person B; roles/spanner.backupWriter for Application C
  2. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseReader for Person B; roles/spanner.backupAdmin for Application C
  3. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseUser for Person B; roles/spanner.databaseReader for Application C
  4. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseUser for Person B; roles/spanner.backupWriter for Application C
Correct answer: A
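Under strict PoLP, roles/spanner.backupWriter is sufficient for an automated backup job: it can create backups without the broader ability to delete or manage them that roles/spanner.backupAdmin grants. A minimal sketch in Python, where the member identifiers are hypothetical placeholders but the role names are Google Cloud's real predefined Cloud Spanner IAM roles:

```python
# Least-privilege Cloud Spanner role assignments (PoLP).
# Member identifiers are hypothetical; the role names are
# Google Cloud's predefined Cloud Spanner IAM roles.
assignments = {
    "person-a@example.com": "roles/spanner.databaseAdmin",   # full database administration
    "person-b@example.com": "roles/spanner.databaseReader",  # read-only, enough for metric reports
    "app-c@example.com":    "roles/spanner.backupWriter",    # create backups, nothing broader
}

# Sanity check: the backup job does not hold the wider backupAdmin role.
assert assignments["app-c@example.com"] != "roles/spanner.backupAdmin"
```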
Question 3
You are designing an augmented reality game for iOS and Android devices. You plan to use Cloud Spanner as the primary backend database for game state storage and player authentication. You want to track in-game rewards that players unlock at every stage of the game. During the testing phase, you discovered that costs are much higher than anticipated, but the query response times are within the SLA. You want to follow Google-recommended practices. You need the database to be performant and highly available while you keep costs low. What should you do?
  1. Manually scale down the number of nodes after the peak period has passed.
  2. Use interleaving to co-locate parent and child rows.
  3. Use the Cloud Spanner query optimizer to determine the most efficient way to execute the SQL query.
  4. Use granular instance sizing in Cloud Spanner and Autoscaler.
Correct answer: D
Question 4
You recently launched a new product to the US market. You currently have two Bigtable clusters in one US region to serve all the traffic. Your marketing team is planning an immediate expansion to APAC. You need to roll out the regional expansion while implementing high availability according to Google-recommended practices. What should you do?
  1. Maintain a target of 23% CPU utilization by locating cluster-a in zone us-central1-a, cluster-b in zone europe-west1-d, and cluster-c in zone asia-east1-b.
  2. Maintain a target of 23% CPU utilization by locating cluster-a in zone us-central1-a, cluster-b in zone us-central1-b, and cluster-c in zone us-east1-a.
  3. Maintain a target of 35% CPU utilization by locating cluster-a in zone us-central1-a, cluster-b in zone australia-southeast1-a, cluster-c in zone europe-west1-d, and cluster-d in zone asia-east1-b.
  4. Maintain a target of 35% CPU utilization by locating cluster-a in zone us-central1-a, cluster-b in zone us-central2-a, cluster-c in zone asia-northeast1-b, and cluster-d in zone asia-east1-b.
Correct answer: D
Question 5
Your ecommerce website captures user clickstream data to analyze customer traffic patterns in real time and support personalization features on your website. You plan to analyze this data using big data tools. You need a low-latency solution that can store 8 TB of data and can scale to millions of read and write requests per second. What should you do?
  1. Write your data into Bigtable and use Dataproc and the Apache Hbase libraries for analysis.
  2. Deploy a Cloud SQL environment with read replicas for improved performance. Use Datastream to export data to Cloud Storage and analyze with Dataproc and the Cloud Storage connector.
  3. Use Memorystore to handle your low-latency requirements and for real-time analytics.
  4. Stream your data into BigQuery and use Dataproc and the BigQuery Storage API to analyze large volumes of data.
Correct answer: A
Question 6
Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hot-spots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation. What should you do? (Choose two.)
  1. Use an auto-incrementing value as the primary key.
  2. Normalize the data model.
  3. Promote low-cardinality attributes in multi-attribute primary keys.
  4. Promote high-cardinality attributes in multi-attribute primary keys.
  5. Use bit-reverse sequential value as the primary key.
Correct answer: DE
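The bit-reversal idea in option 5 can be sketched in a few lines of Python. This is an illustrative helper, not Cloud Spanner's built-in bit-reversed sequence feature; it shows why the technique avoids hotspots: consecutive IDs (1, 2, 3, ...) differ in their low-order bits, so reversing the bits spreads them far apart across the key range instead of appending them all to the same end of the keyspace.

```python
def bit_reverse(value: int, width: int = 64) -> int:
    """Reverse the bit order of `value` within a fixed bit width."""
    result = 0
    for i in range(width):
        if value & (1 << i):
            result |= 1 << (width - 1 - i)
    return result

# Consecutive IDs land far apart in the key range,
# e.g. bit_reverse(1, 8) == 0b10000000 and bit_reverse(3, 8) == 0b11000000,
# so sequential inserts no longer pile onto a single Spanner split.
spread_keys = [bit_reverse(n) for n in (1, 2, 3)]
```

Note that the transform is its own inverse for a fixed width, so the original sequential value can always be recovered from the stored key.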
Question 7
You are managing multiple applications connecting to a database on Cloud SQL for PostgreSQL. You need to be able to monitor database performance to easily identify applications with long-running and resource-intensive queries. What should you do?
  1. Use log messages produced by Cloud SQL.
  2. Use Query Insights for Cloud SQL.
  3. Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.
  4. Use Cloud SQL instance monitoring in the Google Cloud Console.
Correct answer: B
Question 8
You are building an application that allows users to customize their website and mobile experiences.
The application will capture user information and preferences. User profiles have a dynamic schema, and users can add or delete information from their profile. You need to ensure that user changes automatically trigger updates to your downstream BigQuery data warehouse. What should you do?
  1. Store your data in Bigtable, and use the user identifier as the key. Use one column family to store user profile data, and use another column family to store user preferences.
  2. Use Cloud SQL, and create different tables for user profile data and user preferences from your recommendations model. Use SQL to join the user profile data and preferences
  3. Use Firestore in Native mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.
  4. Use Firestore in Datastore mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.
Correct answer: C
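The dynamic-schema requirement is what makes a document store a natural fit here: a profile document can gain or lose fields per user with no schema migration. A minimal sketch of that shape with plain Python dicts (field names are hypothetical examples, not from the question):

```python
# A user profile as a schemaless document, keyed by user identifier --
# the shape a document database such as Firestore stores natively.
# Field names below are hypothetical.
profile = {
    "user_id": "u123",
    "display_name": "Alex",
    "preferences": {"theme": "dark"},
}

# Users can add or delete fields without any schema migration:
profile["preferences"]["language"] = "en"  # add a nested preference
del profile["display_name"]                # drop a field entirely
```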
Question 9
Your application uses Cloud SQL for MySQL. Your users run reports on near-real-time data; however, the additional analytics workload caused excessive load on the primary database. You created a read replica for the analytics workloads, but now your users are complaining about the lag in data changes and that their reports are still slow. You need to improve the report performance and shorten the lag in data replication without making changes to the current reports. Which two approaches should you implement? (Choose two.)
  1. Create secondary indexes on the replica.
  2. Create additional read replicas, and partition your analytics users to use different read replicas.
  3. Disable replication on the read replica, and set the flag for parallel replication on the read replica. Re-enable replication and optimize performance by setting flags on the primary instance.
  4. Disable replication on the primary instance, and set the flag for parallel replication on the primary instance. Re-enable replication and optimize performance by setting flags on the read replica.
  5. Move your analytics workloads to BigQuery, and set up a streaming pipeline to move data and update BigQuery.
Correct answer: BC
Question 10
You are evaluating Cloud SQL for PostgreSQL as a possible destination for your on-premises PostgreSQL instances. Geography is becoming increasingly relevant to customer privacy worldwide.
Your solution must support data residency requirements and include a strategy to:
  • configure where data is stored
  • control where the encryption keys are stored
  • govern the access to data
What should you do?
  1. Replicate Cloud SQL databases across different zones.
  2. Create a Cloud SQL for PostgreSQL instance on Google Cloud for the data that does not need to adhere to data residency requirements. Keep the data that must adhere to data residency requirements on-premises. Make application changes to support both databases.
  3. Allow application access to data only if the users are in the same region as the Google Cloud region for the Cloud SQL for PostgreSQL database.
  4. Use features like customer-managed encryption keys (CMEK), VPC Service Controls, and Identity and Access Management (IAM) policies.
Correct answer: D