Download Cloudera Certified Administrator for Apache Hadoop (CCAH) CDH5 Upgrade.CCA-505.PassLeader.2019-08-07.21q.vcex

Vendor: Cloudera
Exam Code: CCA-505
Exam Name: Cloudera Certified Administrator for Apache Hadoop (CCAH) CDH5 Upgrade
Date: Aug 07, 2019
File Size: 74 KB
Downloads: 1

How to open VCEX files?

Files with VCEX extension can be opened by ProfExam Simulator.

Demo Questions

Question 1
On a cluster running CDH 5.0 or above, you use the hadoop fs -put command to write a 300MB file into a previously empty directory, using an HDFS block size of 64MB. Just after this command has finished writing 200MB of this file, what would another user see when they look in the directory?
  A. They will see the file with its original name. If they attempt to view the file, they will get a ConcurrentFileAccessException until the entire file write is completed on the cluster.
  B. They will see the file with a ._COPYING_ extension on its name. If they attempt to view the file, they will get a ConcurrentFileAccessException until the entire file write is completed on the cluster.
  C. They will see the file with a ._COPYING_ extension on its name. If they view the file, they will see the contents of the file up to the last completed block (as each 64MB block is written, that block becomes available).
  D. The directory will appear to be empty until the entire file write is completed on the cluster.
Correct answer: C
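For reference, this behavior is easy to observe on a running CDH 5 cluster; the directory and file names below are illustrative:

hadoop fs -mkdir demo
hadoop fs -put large-file.bin demo/ &            # start a multi-block write in the background
hadoop fs -ls demo                               # while the put runs, the file is listed with a ._COPYING_ suffix
hadoop fs -tail demo/large-file.bin._COPYING_    # readable up to the last completed block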
Question 2
You want to understand more about how users browse your public website. For example, you want to know which pages they visit prior to placing an order. You have a server farm of 200 web servers hosting your website. Which is the most efficient process to gather these web server logs into your Hadoop cluster for analysis?
  A. Sample the web server logs from the web servers and copy them into HDFS using curl
  B. Ingest the server web logs into HDFS using Flume
  C. Import all user clicks from your OLTP databases into Hadoop using Sqoop
  D. Write a MapReduce job with the web servers as mappers and the Hadoop cluster nodes as reducers
  E. Channel these clickstreams into Hadoop using Hadoop Streaming
Correct answer: AB
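For context, a minimal Flume configuration for this scenario might look like the sketch below, run on each web server; the agent name, log path, and HDFS path are illustrative assumptions:

cat > weblog-agent.conf <<'EOF'
tail1.sources  = src1
tail1.channels = ch1
tail1.sinks    = sink1
tail1.sources.src1.type = exec
tail1.sources.src1.command = tail -F /var/log/httpd/access_log
tail1.sources.src1.channels = ch1
tail1.channels.ch1.type = memory
tail1.sinks.sink1.type = hdfs
tail1.sinks.sink1.hdfs.path = hdfs://namenode:8020/weblogs/%Y-%m-%d
tail1.sinks.sink1.channel = ch1
EOF
flume-ng agent --conf ./ --conf-file weblog-agent.conf --name tail1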
Question 3
Assume you have a file named foo.txt in your local directory. You issue the following three commands:
hadoop fs -mkdir input
hadoop fs -put foo.txt input/foo.txt
hadoop fs -put foo.txt input
What happens when you issue that third command?
  A. The write succeeds, overwriting foo.txt in HDFS with no warning
  B. The write silently fails
  C. The file is uploaded and stored as a plain file named input
  D. You get an error message telling you that input is not a directory
  E. You get an error message telling you that foo.txt already exists, and the file is not written to HDFS
  F. You get an error message telling you that foo.txt already exists, asking you if you would like to overwrite it
  G. You get a warning that foo.txt is being overwritten
Correct answer: E
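You can reproduce this from any client with HDFS access; the error text shown in the comment is approximate:

hadoop fs -mkdir input
hadoop fs -put foo.txt input/foo.txt
hadoop fs -put foo.txt input
# put: `input/foo.txt': File exists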
Question 4
You are migrating a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) on YARN. You want to maintain your MRv1 TaskTracker slot capacities when you migrate. What should you do?
  A. Configure yarn.applicationmaster.resource.memory-mb and yarn.applicationmaster.cpu-vcores so that ApplicationMaster container allocations match the capacity you require.
  B. You don’t need to configure or balance these properties in YARN, as YARN dynamically balances resource management capabilities on your cluster
  C. Configure yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores to match the capacity you require under YARN for each NodeManager
  D. Configure mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum in yarn-site.xml to match your cluster’s configured capacity set by yarn.scheduler.minimum-allocation
Correct answer: C
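As a sketch of the two properties named in the correct answer (the values are illustrative, sized for a worker with 48 GB of RAM and 12 cores; the fragment belongs inside <configuration> in yarn-site.xml):

cat <<'EOF' > yarn-site-fragment.xml   # merge by hand into yarn-site.xml
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>49152</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>12</value>
</property>
EOF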
Question 5
Your cluster is configured with HDFS and MapReduce version 2 (MRv2) on YARN. What is the result when you execute: hadoop jar samplejar.jar MyClass on a client machine?
  A. SampleJar.jar is sent to the ApplicationMaster, which allocates a container for SampleJar.jar
  B. SampleJar.jar is serialized into an XML file which is submitted to the ApplicationMaster
  C. SampleJar.jar is sent directly to the ResourceManager
  D. SampleJar.jar is placed in a temporary directory in HDFS
Correct answer: A
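A quick way to watch this submission from the client side (the jar and class names come from the question; the yarn command is standard):

hadoop jar samplejar.jar MyClass &
yarn application -list -appStates ACCEPTED,RUNNING   # the new application appears with its ApplicationMaster state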
Question 6
You have a 20-node Hadoop cluster, with 18 slave nodes and 2 master nodes running HDFS High Availability (HA). You want to minimize the chance of data loss in your cluster. What should you do?
  A. Add another master node to increase the number of nodes running the JournalNode, which increases the number of machines available to HA to create a quorum
  B. Configure the cluster’s disk drives with an appropriate fault-tolerant RAID level
  C. Run the ResourceManager on a different master from the NameNode in order to load-share HDFS metadata processing
  D. Run a Secondary NameNode on a different master from the NameNode in order to provide automatic recovery from a NameNode failure
  E. Set an HDFS replication factor that provides data redundancy, protecting against failure
Correct answer: C
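For reference, the replication factor mentioned in option E can be inspected and changed as follows; the path and value are illustrative:

hdfs getconf -confKey dfs.replication    # cluster-wide default, set via hdfs-site.xml
hadoop fs -setrep -w 3 /data             # change replication for an existing path and wait for completion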
Question 7
You have a cluster running with the Fair Scheduler enabled. There are currently no jobs running on the cluster, and you submit Job A, so that only Job A is running on the cluster. A while later, you submit Job B. Now Job A and Job B are running on the cluster at the same time. How will the Fair Scheduler handle these two jobs?
  A. When Job A gets submitted, it consumes all the task slots.
  B. When Job A gets submitted, it doesn’t consume all the task slots.
  C. When Job B gets submitted, Job A has to finish first before Job B can be scheduled.
  D. When Job B gets submitted, it will get assigned tasks, while Job A continues to run with fewer tasks.
Correct answer: C
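For context, the Fair Scheduler referenced here is enabled through a yarn-site.xml property; a minimal sketch, written to a scratch file for hand-merging:

cat <<'EOF' > fair-scheduler-fragment.xml   # merge inside <configuration> in yarn-site.xml
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
EOF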
Question 8
You want a node to only swap Hadoop daemon data from RAM to disk when absolutely necessary. What should you do?
  A. Delete the /swapfile file on the node
  B. Set vm.swappiness to 0 in /etc/sysctl.conf
  C. Set the ram.swap parameter to 0 in core-site.xml
  D. Delete the /etc/swap file on the node
  E. Delete the /dev/vmswap file on the node
Correct answer: B
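The setting from the correct answer can be applied and verified like this:

echo 'vm.swappiness = 0' >> /etc/sysctl.conf   # persist across reboots
sysctl -w vm.swappiness=0                      # apply immediately
cat /proc/sys/vm/swappiness                    # verify the current value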
Question 9
You are configuring your cluster to run HDFS and MapReduce v2 (MRv2) on YARN. Which daemons need to be installed on your cluster’s master nodes? (Choose two)
  A. ResourceManager
  B. DataNode
  C. NameNode
  D. JobTracker
  E. TaskTracker
  F. HMaster
Correct answer: AC
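You can confirm which daemons a given master node is running with jps; the process IDs below are illustrative for a node hosting both daemons:

$ jps
2817 NameNode
3104 ResourceManager
3390 Jps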
Question 10
You observe that the number of spilled records from your Map tasks far exceeds the number of map output records. Your child heap size is 1 GB and your io.sort.mb value is set to 100 MB. How would you tune your io.sort.mb value to achieve maximum memory-to-disk I/O ratio?
  A. Decrease the io.sort.mb value to 0
  B. Increase the io.sort.mb to 1 GB
  C. For a 1 GB child heap size, an io.sort.mb of 128 MB will always maximize memory-to-disk I/O
  D. Tune the io.sort.mb value until you observe that the number of spilled records equals (or is as close as possible to) the number of map output records
Correct answer: D
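One way to iterate on this tuning, assuming the job’s driver uses ToolRunner so it accepts -D options (the jar, class, and buffer value are illustrative):

hadoop jar myjob.jar MyJob -D io.sort.mb=200 input output
# then compare the "Spilled Records" and "Map output records" counters in the job's counter output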