File Info

Exam: Cloudera Certified Administrator for Apache Hadoop (CCAH)
Number: CCA-500
File Name: Cloudera Certified Administrator for Apache Hadoop (CCAH).selftestengine.CCA-500.2020-01-15.1e.30q.vcex
Size: 211 KB
Posted: January 15, 2020
Downloads: 3



How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.


Demo Questions

Question 1
You are configuring your cluster to run HDFS and MapReduce v2 (MRv2) on YARN. Which two daemons need to be installed on your cluster’s master nodes? (Choose two)

  • A: HMaster
  • B: ResourceManager
  • C: TaskManager
  • D: JobTracker
  • E: NameNode
  • F: DataNode



Question 2
You are running a Hadoop cluster with a NameNode on host mynamenode, a secondary NameNode on host mysecondarynamenode, and several DataNodes.
Which best describes how you determine when the last checkpoint happened?

  • A: Execute hdfs namenode -report on the command line and look at the Last Checkpoint information
  • B: Execute hdfs dfsadmin -saveNamespace on the command line, which returns the last checkpoint value in the fstime file
  • C: Connect to the web UI of the Secondary NameNode (http://mysecondary:50090/) and look at the “Last Checkpoint” information
  • D: Connect to the web UI of the NameNode (http://mynamenode:50070) and look at the “Last Checkpoint” information



Question 3
What does CDH packaging do on install to facilitate Kerberos security setup?

  • A: Automatically configures permissions for log files at $MAPRED_LOG_DIR/userlogs
  • B: Creates users for hdfs and mapreduce to facilitate role assignment
  • C: Creates directories for temp, hdfs, and mapreduce with the correct permissions
  • D: Creates a set of pre-configured Kerberos keytab files and their permissions
  • E: Creates and configures your kdc with default cluster values



Question 4
Which three basic configuration parameters must you set to migrate your cluster from MapReduce 1 (MRv1) to MapReduce V2 (MRv2)? (Choose three)

  • A: Configure the NodeManager to enable MapReduce services on YARN by setting the following property in yarn-site.xml: 
    <name>yarn.nodemanager.hostname</name> 
    <value>your_nodeManager_shuffle</value>
  • B: Configure the NodeManager hostname and enable node services on YARN by setting the following property in yarn-site.xml: 
    <name>yarn.nodemanager.hostname</name> 
    <value>your_nodeManager_hostname</value>
  • C: Configure a default scheduler to run on YARN by setting the following property in mapred-site.xml: 
    <name>mapreduce.jobtracker.taskScheduler</name> 
    <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
  • D: Configure the number of map tasks per job on YARN by setting the following property in mapred-site.xml: 
    <name>mapreduce.job.maps</name> 
    <value>2</value>
  • E: Configure the ResourceManager hostname and enable node services on YARN by setting the following property in yarn-site.xml: 
    <name>yarn.resourcemanager.hostname</name> 
    <value>your_resourceManager_hostname</value>
  • F: Configure MapReduce as a Framework running on YARN by setting the following property in mapred-site.xml: 
    <name>mapreduce.framework.name</name> 
    <value>yarn</value>
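
For context, the properties below sketch what a minimal MRv1-to-MRv2 migration commonly sets in Hadoop 2. The property names are real; the hostname value is a placeholder, and your distribution's documentation should be consulted for the complete set.

```xml
<!-- mapred-site.xml: run MapReduce as an application on YARN -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml: point the cluster at the ResourceManager (placeholder hostname) -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>resourcemanager.example.com</value>
</property>

<!-- yarn-site.xml: enable the shuffle auxiliary service MapReduce needs on each NodeManager -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
```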



Question 5
You need to analyze 60,000,000 images stored in JPEG format, each of which is approximately 25 KB. Because your Hadoop cluster isn’t optimized for storing and processing many small files, you decide to do the following: 
  • Group the individual images into a set of larger files 
  • Use the set of larger files as input for a MapReduce job that processes them directly with Python using Hadoop Streaming 
Which data serialization system gives the flexibility to do this?

  • A: CSV
  • B: XML
  • C: HTML
  • D: Avro
  • E: SequenceFiles
  • F: JSON
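
To make the grouping idea concrete, here is a minimal Python sketch that packs many small files into one container file using length-prefixed key/value records. It only illustrates the concept; a real job would write an actual Hadoop SequenceFile (filename as key, image bytes as value), whose binary format differs from this simplified framing.

```python
import struct

def pack(records, out_path):
    """Pack (name, payload_bytes) pairs into one container file.

    Each record is framed as: 4-byte key length, 4-byte value length
    (both big-endian), then the key bytes, then the payload bytes.
    """
    with open(out_path, "wb") as out:
        for name, payload in records:
            key = name.encode("utf-8")
            out.write(struct.pack(">II", len(key), len(payload)))
            out.write(key)
            out.write(payload)

def unpack(in_path):
    """Yield (name, payload_bytes) pairs back out of a packed file."""
    with open(in_path, "rb") as f:
        while True:
            header = f.read(8)
            if not header:
                break
            klen, vlen = struct.unpack(">II", header)
            yield f.read(klen).decode("utf-8"), f.read(vlen)
```

The point of the exercise is the same either way: thousands of tiny objects become a handful of large, splittable files that map tasks can stream through efficiently.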



Question 6
Identify two features/issues that YARN is designed to address: (Choose two)

  • A: Standardize on a single MapReduce API
  • B: Single point of failure in the NameNode
  • C: Reduce complexity of the MapReduce APIs
  • D: Resource pressure on the JobTracker
  • E: Ability to run frameworks other than MapReduce, such as MPI
  • F: HDFS latency



Question 7
Which is the default scheduler in YARN?

  • A: YARN doesn’t configure a default scheduler; you must first assign an appropriate scheduler class in yarn-site.xml
  • B: Capacity Scheduler
  • C: Fair Scheduler
  • D: FIFO Scheduler
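
As background, the active scheduler can be set explicitly via the real property yarn.resourcemanager.scheduler.class in yarn-site.xml, rather than relying on whatever default the distribution ships (note that stock Apache Hadoop 2 and CDH do not necessarily ship the same default):

```xml
<!-- yarn-site.xml: select the scheduler explicitly -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```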



Question 8
Which scheduler would you deploy to ensure that your cluster allows short jobs to finish within a reasonable time without starving long-running jobs?

  • A: Complexity Fair Scheduler (CFS)
  • B: Capacity Scheduler
  • C: Fair Scheduler
  • D: FIFO Scheduler
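
For reference, switching the cluster to the Fair Scheduler uses the same scheduler-class property with the FairScheduler implementation; queue weights and policies would then be defined in a separate allocation file:

```xml
<!-- yarn-site.xml: use the Fair Scheduler implementation -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
```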



Question 9
Your cluster is configured with HDFS and MapReduce version 2 (MRv2) on YARN. What is the result when you execute hadoop jar SampleJar.jar MyClass on a client machine?

  • A: SampleJar.jar is sent to the ApplicationMaster, which allocates a container for SampleJar.jar
  • B: SampleJar.jar is placed in a temporary directory in HDFS
  • C: SampleJar.jar is sent directly to the ResourceManager
  • D: SampleJar.jar is serialized into an XML file which is submitted to the ApplicationMaster



Question 10
You are working on a project where you need to chain together MapReduce and Pig jobs. You also need the ability to use forks, decision points, and path joins. Which ecosystem project should you use to perform these actions?

  • A: Oozie
  • B: ZooKeeper
  • C: HBase
  • D: Sqoop
  • E: HUE
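
To illustrate what "fork" and "join" look like in practice, here is a sketch of an Oozie workflow definition. The action names are placeholders and the job configuration inside each action is elided, so this is not a schema-complete workflow:

```xml
<workflow-app xmlns="uri:oozie:workflow:0.4" name="chained-jobs">
  <start to="split"/>
  <!-- Fork: run the MapReduce and Pig steps in parallel -->
  <fork name="split">
    <path start="mr-step"/>
    <path start="pig-step"/>
  </fork>
  <action name="mr-step">
    <map-reduce><!-- MapReduce job configuration elided --></map-reduce>
    <ok to="merge"/>
    <error to="fail"/>
  </action>
  <action name="pig-step">
    <pig><!-- Pig script configuration elided --></pig>
    <ok to="merge"/>
    <error to="fail"/>
  </action>
  <!-- Join: wait for both forked paths before continuing -->
  <join name="merge" to="end"/>
  <kill name="fail">
    <message>Workflow failed</message>
  </kill>
  <end name="end"/>
</workflow-app>
```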








