File Info

Exam: Data Engineering on Microsoft Azure (beta)
Number: DP-203
File Name: Data Engineering on Microsoft Azure (beta).dump4pass.DP-203.2022-04-07.5e.199q.vcex
Size: 12.63 MB
Posted: April 07, 2022
Downloads: 17



How to open VCEX & EXAM Files?

Files with VCEX & EXAM extensions can be opened by ProfExam Simulator.

Demo Questions

Question 1
You are creating dimensions for a data warehouse in an Azure Synapse Analytics dedicated SQL pool.  
You create a table by using the Transact-SQL statement shown in the following exhibit.

[Exhibit: Transact-SQL CREATE TABLE statement (not included in this file)]

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.  
NOTE: Each correct selection is worth one point. 




Question 2
You are designing a fact table named FactPurchase in an Azure Synapse Analytics dedicated SQL pool. The table contains purchases from suppliers for a retail store. FactPurchase will contain the following columns.

[Exhibit: FactPurchase column definitions (not included in this file)]
FactPurchase will have 1 million rows of data added daily and will contain three years of data.  
Transact-SQL queries similar to the following query will be executed daily.  
SELECT SupplierKey, StockItemKey, COUNT(*)
FROM FactPurchase
WHERE DateKey >= 20210101
  AND DateKey <= 20210131
GROUP BY SupplierKey, StockItemKey
  
Which table distribution will minimize query times?

  • A: replicated
  • B: hash-distributed on PurchaseKey
  • C: round-robin
  • D: hash-distributed on DateKey
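For reference, the distribution of a dedicated SQL pool table is chosen in the WITH clause of CREATE TABLE. A minimal sketch of a hash-distributed fact table follows; the column list is abbreviated from the question, and the distribution column shown is purely illustrative, not the answer:

```sql
-- Sketch: hash-distributed fact table in a dedicated SQL pool.
-- The distribution column here is illustrative only -- in practice, pick a
-- high-cardinality column that spreads rows evenly across distributions and
-- matches the common join/aggregation patterns of the workload.
CREATE TABLE dbo.FactPurchase
(
    PurchaseKey  BIGINT NOT NULL,
    DateKey      INT    NOT NULL,
    SupplierKey  INT    NOT NULL,
    StockItemKey INT    NOT NULL
)
WITH
(
    DISTRIBUTION = HASH (SupplierKey),
    CLUSTERED COLUMNSTORE INDEX
);
```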



Question 3
You have a table in an Azure Synapse Analytics dedicated SQL pool. The table was created by using the following Transact-SQL statement.

[Exhibit: Transact-SQL CREATE TABLE statement (not included in this file)]
You need to alter the table to meet the following requirements:
  • Ensure that users can identify the current manager of employees.
  • Support creating an employee reporting hierarchy for your entire company.
  • Provide fast lookup of the managers’ attributes, such as name and job title.
Which column should you add to the table?

  • A: [ManagerEmployeeID] [int] NULL 
  • B: [ManagerEmployeeID] [smallint] NULL
  • C: [ManagerEmployeeKey] [int] NULL 
  • D: [ManagerName] [varchar](200) NULL
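For context, a reporting hierarchy in a dimension table is typically modeled as a self-referencing surrogate key that joins the dimension back to itself. A hedged sketch follows; the table name DimEmployee and its columns are assumptions based on the question, not taken from the missing exhibit:

```sql
-- Sketch: add a self-referencing surrogate key, then join the dimension to
-- itself to look up a manager's attributes. All names are assumed.
ALTER TABLE dbo.DimEmployee
ADD ManagerEmployeeKey INT NULL;

SELECT e.EmployeeName,
       m.EmployeeName AS ManagerName,
       m.JobTitle     AS ManagerJobTitle
FROM dbo.DimEmployee AS e
LEFT JOIN dbo.DimEmployee AS m
    ON e.ManagerEmployeeKey = m.EmployeeKey;
```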



Question 4
You have an Azure Synapse workspace named MyWorkspace that contains an Apache Spark database named mytestdb.    
You run the following command in an Azure Synapse Analytics Spark pool in MyWorkspace.    
CREATE TABLE mytestdb.myParquetTable(  
EmployeeID int,  
EmployeeName string,  
EmployeeStartDate date)  
USING Parquet  
  
You then use Spark to insert a row into mytestdb.myParquetTable. The row contains the following data.

[Exhibit: inserted row values (not included in this file)]
One minute later, you execute the following query from a serverless SQL pool in MyWorkspace.  
SELECT EmployeeID  
FROM mytestdb.dbo.myParquetTable  
WHERE name = 'Alice';  
What will be returned by the query?

  • A: 24
  • B: an error
  • C: a null value



Question 5
You have a table named SalesFact in an enterprise data warehouse in Azure Synapse Analytics. SalesFact contains sales data from the past 36 months and has the following characteristics:
  • Is partitioned by month
  • Contains one billion rows
  • Has clustered columnstore indexes
At the beginning of each month, you need to remove data from SalesFact that is older than 36 months as quickly as possible.  
Which three actions should you perform in sequence in a stored procedure? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.   
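Deleting a month of data row by row from a billion-row columnstore table is slow, whereas partition-level operations are metadata-only. The general pattern can be sketched as follows; the distribution column and partition number are assumptions, since a real procedure would compute which boundary to remove:

```sql
-- Sketch: remove the oldest month by switching its partition out to an empty
-- work table, then dropping the work table. Names and the partition number
-- are assumptions, not taken from the question.
CREATE TABLE dbo.SalesFact_Work
WITH (DISTRIBUTION = HASH (CustomerKey), CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM dbo.SalesFact WHERE 1 = 2;

ALTER TABLE dbo.SalesFact SWITCH PARTITION 1 TO dbo.SalesFact_Work;

DROP TABLE dbo.SalesFact_Work;
```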




Question 6
You have files and folders in Azure Data Lake Storage Gen2 for an Azure Synapse workspace as shown in the following exhibit.

[Exhibit: folder hierarchy under /topfolder/ containing File1.csv through File4.csv (not included in this file)]
You create an external table named ExtTable that has LOCATION='/topfolder/'.    
When you query ExtTable by using an Azure Synapse Analytics serverless SQL pool, which files are returned?

  • A: File2.csv and File3.csv only 
  • B: File1.csv and File4.csv only
  • C: File1.csv, File2.csv, File3.csv, and File4.csv
  • D: File1.csv only
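For context, an external table over a folder is declared roughly as below; whether files in subfolders are returned depends on the external table type and pool, which is what this question tests. The data source, file format, and column list here are assumptions; only LOCATION comes from the question:

```sql
-- Sketch: external table over a folder in a serverless SQL pool.
-- DATA_SOURCE, FILE_FORMAT, and the column list are placeholders.
CREATE EXTERNAL TABLE ExtTable
(
    Col1 VARCHAR(100),
    Col2 INT
)
WITH
(
    LOCATION = '/topfolder/',
    DATA_SOURCE = MyDataLakeSource,   -- assumed external data source
    FILE_FORMAT = MyCsvFormat         -- assumed CSV external file format
);
```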



Question 7
You are planning the deployment of Azure Data Lake Storage Gen2.  
You have the following two reports that will access the data lake:
  • Report1: Reads three columns from a file that contains 50 columns.
  • Report2: Queries a single record based on a timestamp.
You need to recommend in which format to store the data in the data lake to support the reports. The solution must minimize read times.    
What should you recommend for each report? To answer, select the appropriate options in the answer area.    
NOTE: Each correct selection is worth one point. 




Question 8
You are designing the folder structure for an Azure Data Lake Storage Gen2 container.    
Users will query data by using a variety of services including Azure Databricks and Azure Synapse Analytics serverless SQL pools. The data will be secured by subject area. Most queries will include data from the current year or current month.    
Which folder structure should you recommend to support fast queries and simplified folder security? 

  • A: /{SubjectArea}/{DataSource}/{DD}/{MM}/{YYYY}/{FileData}_{YYYY}_{MM}_{DD}.csv
  • B: /{DD}/{MM}/{YYYY}/{SubjectArea}/{DataSource}/{FileData}_{YYYY}_{MM}_{DD}.csv 
  • C: /{YYYY}/{MM}/{DD}/{SubjectArea}/{DataSource}/{FileData}_{YYYY}_{MM}_{DD}.csv
  • D: /{SubjectArea}/{DataSource}/{YYYY}/{MM}/{DD}/{FileData}_{YYYY}_{MM}_{DD}.csv



Question 9
You need to output files from Azure Data Factory.    
Which file format should you use for each type of output? To answer, select the appropriate options in the answer area.    
NOTE: Each correct selection is worth one point. 




Question 10
You use Azure Data Factory to prepare data to be queried by Azure Synapse Analytics serverless SQL pools.    
Files are initially ingested into an Azure Data Lake Storage Gen2 account as 10 small JSON files. Each file contains the same data attributes and data from a subsidiary of your company.    
You need to move the files to a different folder and transform the data to meet the following requirements:
  • Provide the fastest possible query times.
  • Automatically infer the schema from the underlying files.
How should you configure the Data Factory copy activity? To answer, select the appropriate options in the answer area.    
NOTE: Each correct selection is worth one point. 
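On the sink side of a copy activity, the output format is set by the sink dataset type and file consolidation by the copy behavior. A simplified JSON fragment follows; the activity name is a placeholder, while the ParquetSink type and the MergeFiles copy behavior are documented Data Factory settings:

```json
{
  "name": "CopySubsidiaryFiles",
  "type": "Copy",
  "typeProperties": {
    "source": { "type": "JsonSource" },
    "sink": {
      "type": "ParquetSink",
      "storeSettings": {
        "type": "AzureBlobFSWriteSettings",
        "copyBehavior": "MergeFiles"
      }
    }
  }
}
```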








