Download Oracle Essbase 11 Essentials.1z0-531.SelfTestEngine.2018-10-25.42q.vcex

Vendor: Oracle
Exam Code: 1z0-531
Exam Name: Oracle Essbase 11 Essentials
Date: Oct 25, 2018
File Size: 874 KB

How to open VCEX files?

Files with the VCEX extension can be opened with ProfExam Simulator.

Demo Questions

Question 1
You have the following analysis requirement. Products roll up to Product Family, which rolls up to Product Category. You also need to group Products by Product Manager. Product Managers may manage one or more Products across product families. You do not need to create reports with Product Manager by Product Family. You need to secure products by Product Manager for planning submissions. 
You consider Shared members as a solution because of which two options? 
  1. Shared members provide cross-tab reporting (Product Manager in the rows and Product Family across the columns)
  2. Shared members provide additional categorization but result in a smaller database than if you were to add Product Manager as a separate dimension
  3. You can assign security to shared members
  4. Shared members can be assigned to sparse members only
Correct answer: BC
Explanation:
The data values associated with a shared member come from another member with the same name. The shared member stores a pointer to data contained in the other member, and the data is stored only once. To define a member as shared, an actual nonshared member of the same name must exist. 
Using shared members lets you use members repeatedly throughout a dimension. Essbase stores the data value only once, but it displays in multiple locations. 
Storing the data value only once saves space and improves processing efficiency. (B) 
Shared members must be in the same dimension. Data can be shared by multiple members. 
Incorrect answers:
A: Attributes, not shared members, offer cross-tab reporting.
D: Shared members can be assigned to both dense and sparse members.
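A rough outline sketch of the alternate rollup the answer describes, using hypothetical member names; the Product repeated under its Product Manager is a shared member pointing back to the stored member:
Product
  Electronics (Product Category)
    Audio (Product Family)
      MP3 Player (stored member)
  Managers
    Manager A (Product Manager)
      MP3 Player (Shared Member)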
Question 2
Identify four disadvantages / considerations when using a transparent partition.
  1. Old data 
  2. Slow retrievals
  3. Slow calculations if referencing dynamic calc members in the source
  4. Outline sync complexities
  5. Increased network load
  6. Downtime required to sync data
Correct answer: BCDE
Explanation:
Disadvantages of Transparent Partitions 
* Outline synchronization is required (D) 
If you make changes to one outline, the two outlines are no longer synchronized. Although Essbase makes whatever changes it can to replicated and transparent partitions when the outlines are not synchronized, Essbase may not be able to make the data in the data source available in the data target. 
Essbase tracks changes that you make to block storage outlines and provides tools to keep your block storage outlines synchronized. 
Note:
Essbase does not enable automatic synchronization of aggregate storage outlines. You must manually make the same changes to the source and target outlines. 
* Transparent partitions increase network activity, because Essbase transfers the data at the data source across the network to the data target. Increased network activity results in slower retrieval times for users. (E) 
* Because more users are accessing the data source, retrieval time may be slower. (B) 
* If the data source fails, users at both the data source and the data target are affected. Therefore, the network and data source must be available whenever users at the data source or data target need them. 
* (C) When you perform a calculation on a transparent partition, Essbase performs the calculation using the current values of the local data and transparent dependents. Essbase does not recalculate the values of transparent dependents, because the outlines for the data source and the data target may be so different that such a calculation is inaccurate. To calculate all partitions, issue a CALC ALL command for each individual partition, and then perform a CALC ALL command at the top level using the new values for each partition. 
* Formulas assigned to members in the data source may produce calculated results that are inconsistent with formulas or consolidations defined in the data target, and vice versa. 
Note: Advantages of Transparent Partitions
Transparent partitions can solve many database problems, but transparent partitions are not always the ideal partition type. 
* You need less disk space, because you are storing the data in one database. 
* The data accessed from the data target is always the latest version. (not A) 
* When the user updates the data at the data source, Essbase makes those changes at the data target. 
* Individual databases are smaller, so they can be calculated more quickly. 
* The distribution of the data is invisible to the end user and the end user’s tools. 
* You can load the data from either the data source or data target. 
* You can enable write-back functionality for aggregate storage databases by creating a transparent partition between an aggregate storage database as the source and a block storage database as the target.
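A minimal sketch of the calculation order described in (C), assuming hypothetical source and target databases; each CALC ALL runs as its own calculation script against the named database:
/* Step 1: run in each source (partitioned) database */
CALC ALL;
/* Step 2: run in the data target after the sources finish, so it uses the new partition values */
CALC ALL;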
Question 3
Assuming Sales and Year are sparse and Actual is dense, what two actions will the following calc script perform? 
FIX (Actual, @CY, Sales) 
DATAEXPORT "BINFILE" "data.txt"; 
ENDFIX
  1. Export the data for actual, current year, sales into a text file called data.txt
  2. Export the data for actual, current year into a text file called data.txt
  3. Export data blocks in a compressed encrypted format
  4. Create a text file that can be imported using the DATAIMPORTBIN calc command in another database that has different dimensionality
Correct answer: AC
Explanation:
The FIX…ENDFIX command block restricts database calculations to a subset of the database. All commands nested between the FIX and ENDFIX statements are restricted to the specified database subset. 
Syntax:
FIX (fixMbrs) 
COMMANDS ; 
ENDFIX 
fixMbrs: A member name or list of members from any number of database dimensions.
DATAEXPORT writes data to a text file, binary file, or as direct input to a relational file using ODBC. The data blocks will be saved in a compressed encrypted format to a text file. 
For a binary output file:
DATAEXPORT "Binfile" "fileName" 
Incorrect answers:
  • The Sales dimension is included as well. 
  • Use the DATAIMPORTBIN command to import a previously exported binary export file. 
However, the data cannot be imported into another database with a different dimensionality.
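A minimal calc script sketch expanding the question's example; the SET DATAEXPORTOPTIONS block, the delimiter, and the second file name are illustrative assumptions, not part of the question:
SET DATAEXPORTOPTIONS
  {
  DataExportLevel "ALL";
  };
/* Binary export, as in the question */
FIX (Actual, @CY, Sales)
  DATAEXPORT "BINFILE" "data.txt";
ENDFIX
/* Text export of the same slice to a comma-delimited file */
FIX (Actual, @CY, Sales)
  DATAEXPORT "File" "," "data_text.txt";
ENDFIX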
Question 4
A calculation script is performed on a database for which Create Block on Equation is OFF. The command SET CREATEBLOCKONEQ ON is issued immediately before an equation in the script. Which statement accurately describes when blocks will be created?
  1. Blocks will be created ONLY when the equation assigns non-constant values to members of a sparse dimension
  2. Blocks will be created ONLY when the equation assigns constant values to members of a sparse dimension
  3. Blocks will be created when the equation assigns either constant or non-constant values to members of a sparse dimension.
  4. No blocks will be created.
Correct answer: C
Explanation:
C: Blocks are always created (whether CREATEBLOCKONEQ is ON or OFF) when a constant value is assigned to a member of a sparse dimension for which a block does not exist. When SET CREATEBLOCKONEQ is ON, blocks are also created when a non-constant value is assigned to a member of a sparse dimension for which a block does not exist. 
Note: If this were a select-two question, the alternatives would have to be worded slightly differently.
Note #1:
The SET CREATEBLOCKONEQ command controls, within a calculation script, whether or not new blocks are created when a calculation formula assigns anything other than a constant to a member of a sparse dimension. SET CREATEBLOCKONEQ overrides the Create Block on Equation setting for the database. 
Syntax: SET CREATEBLOCKONEQ ON|OFF;
ON: When a calculation formula assigns a non-constant value to a member of a sparse dimension for which a block does not exist, Analytic Services creates a new block. 
Note #2: The Create Blocks on Equation setting is a database property. The initial value for the Create Blocks on Equation setting is OFF; no new blocks are created when something other than a constant is assigned to a sparse dimension member. You can use Administration Services or MaxL to set the Create Blocks on Equation setting to ON at the database-level. For more information about enabling the Create Blocks on Equation property for a database, see MaxL documentation in the Technical Reference or Administration Services online help. 
For more specific control, you can use the SET CREATEBLOCKONEQ calculation command within a calculation script to control creation of new blocks at the time the command is encountered in the script. Use of the SET CREATEBLOCKONEQ calculation command has the following characteristics:
  • When Analytic Services encounters a SET CREATEBLOCKONEQ command within a calculation script, Analytic Services ignores the database-level setting. 
  • Where needed in the calculation script, you can use multiple SET CREATEBLOCKONEQ commands to define the Create Blocks on Equation setting value for the calculations that follow each command. 
  • The value set by the SET CREATEBLOCKONEQ command stays in effect until the next SET CREATEBLOCKONEQ command is processed or the calculation script is finished. 
Reference: SET CREATEBLOCKONEQ
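A minimal calc script sketch of the behavior described above, with hypothetical member names; assume Budget and Actual belong to a sparse dimension and the database-level Create Blocks on Equation setting is OFF:
SET CREATEBLOCKONEQ ON;
/* Non-constant assignment to a sparse member: missing blocks are now created */
Budget = Actual * 1.05;
SET CREATEBLOCKONEQ OFF;
/* Constant assignment: a missing block is created regardless of the setting */
Budget = 100;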
Question 5
Market size is an attribute dimension with the following members: Large, Medium, and Small.
Which of the following options represent valid syntax statements in a calc script?
  1. FIX (@ATTRIBUTE(Large))
  2. Calc Dim (Accounts, Markets, "Market Size");
  3. Calc Dim (Accounts, Markets, Market Size);
  4. FIX(Large)
Correct answer: AB
Explanation:
Member names that contain spaces, such as "Market Size", must be enclosed in double quotation marks (B), and attribute members are referenced in a FIX statement through the @ATTRIBUTE function (A). For example, using Sample Basic, assume this statement is in a calculation script:
.. FIX (@children(january)) 
CALC DIM (Accounts, Product, Market) 
ENDFIX 
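A minimal sketch showing the two valid forms from the answer in context; the calculation inside the FIX is illustrative:
FIX (@ATTRIBUTE(Large))
   CALC DIM (Accounts, Markets);
ENDFIX
/* Member names containing spaces must be quoted */
CALC DIM (Accounts, Markets, "Market Size");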
  
Question 6
Moving a stored entity member in a sparse dimension causes _________.
  1. a Full restructure
  2. an Index restructure
  3. an Outline restructure
  4. No restructure
Correct answer: B
Explanation:
If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index and creates new index files. Restructuring the index is relatively fast; the time required depends on the index size.
Question 7
During a multidimensional analysis, getting data from a supplemental data source is an example of _________.
  1. Drill across
  2. Drill Through
  3. Trending
  4. Pivoting
Correct answer: A
Question 8
Identify the two true statements about expense reporting tags.
  1. Provide accurate time balance calculations
  2. Provide accurate variance reporting on revenue and expense accounts
  3. Are assigned to the dimension tagged Time
  4. Are assigned to the dimension tagged Accounts
  5. Are assigned to the Dimension containing variance members.
Correct answer: BD
Explanation:
B: The variance reporting calculation requires that any item that represents an expense to the company must have an expense reporting tag.
Essbase provides two variance reporting properties: expense and non-expense. The default is non-expense.
Variance reporting properties define how Essbase calculates the difference between actual and budget data in members with the @VAR or @VARPER function in their member formulas. 
D: Expense reporting tags are assigned to the Accounts dimension so that variance and profit members do not show misleading negative values when they are calculated. 
Note: The first, last, average, and expense tags are available exclusively for use with accounts dimension members.
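A minimal member-formula sketch of the variance calculation the explanation refers to; Variance, "Variance %", Actual, and Budget are hypothetical members:
Variance = @VAR(Actual, Budget);
"Variance %" = @VARPER(Actual, Budget);
For a member tagged Expense, @VAR reverses the subtraction, so spending less than budget reports as a favorable (positive) variance.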
Question 9
You are building a sales analysis model. In this model there is no requirement for calculation. The user needs to aggregate data across all dimensions and wants to archive many years of data. Archived data will be analyzed once in a while. 
What type of cube would you build using Essbase for this kind of requirement? 
  1. Block Storage
  2. XOLAP
  3. Aggregate Storage
  4. Virtual Cube
Correct answer: C
Explanation:
Consider using the aggregate storage model if the following is true for your database:
  • The database is sparse and has many dimensions, and/or the dimensions have many levels of members. 
  • The database is used primarily for read-only purposes, with few or no data updates. (C) 
  • The outline contains no formulas except in the dimension tagged as Accounts. 
  • Calculation of the database is frequent, is based mainly on summation of the data, and does not rely on calculation scripts.
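A minimal MaxL sketch for creating the archive cube as an aggregate storage application; the application and database names are hypothetical, and the exact clause should be verified against the MaxL reference:
create application SalesArchive using aggregate_storage;
create database SalesArchive.SalesHist;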
Question 10
The data block density for a particular BSO database is between 10% and 90%, and data values within the block do not consecutively repeat. 
Which type of compression would be most appropriate to use?
  1. Bitmap
  2. RLE
  3. ZLIB 
  4. No compression required
Correct answer: A
Explanation:
Bitmap is good for non-repeating data. It will use Bitmap or IVP (Index Value Pair). 
Note: Bitmap compression, the default. Essbase stores only non-missing values and uses a bitmapping scheme. A bitmap uses one bit for each cell in the data block, whether the cell value is missing or non-missing. When a data block is not compressed, Essbase uses 8 bytes to store every non-missing cell. In most cases, bitmap compression conserves disk space more efficiently. However, much depends on the configuration of the data. 
Incorrect answers:
RLE: You should change to RLE compression when the block density is < 3% (or if you have all the same values in the database - lots of zeros).
Note: RLE (Run Length Encoding) is a good compression type when your data has many zeros (low block density) or often repeats. RLE uses multiple compression methods (one per block): for each block it chooses RLE, bitmap, or IVP.
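A rough sizing sketch, assuming a hypothetical block of 1,000 cells of which 400 are non-missing: bitmap compression stores a 1,000-bit (125-byte) bitmap plus 400 x 8 = 3,200 bytes for the non-missing values, so the 600 missing cells cost only their bits in the bitmap rather than 8 bytes each.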