These questions are similar to the ones asked in the actual test.
How do I know? Because I recently certified with the latest version of the Associate Certification exam.
Before you start, here are some key features of the HANA Application Associate Certification exam:
– The exam is computer-based, and you have three hours to answer 80 questions.
– The questions are (mostly) multiple choice, and there is NO penalty for an incorrect answer.
– Some of the questions have more than one correct answer. You must get ALL the options correct to be awarded points.
– The official pass percentage is 65% (but this can vary). You will be told the exact passing percentage before you begin your test.
Note: Unless stated otherwise, all questions may have more than one correct answer.
Q1. Which of the following is NOT a replication method for data replication from a source system to SAP HANA?
a. ETL based
b. Trigger based
c. Time based
d. Log based
The figure below gives an overview of the alternative methods for data replication from a source system to the SAP HANA database.
Each method handles the required data replication differently, and consequently each method has different strengths.
It depends on your specific application field and the existing system landscape as to which of the methods best serves your needs.
Trigger-Based Replication is based on capturing database changes at a high level of abstraction in the source ERP system.
This method of replication benefits from being database-independent, and can also parallelize database changes on multiple tables or by segmenting large table changes.
Extraction-Transformation-Load (ETL) Based Data Replication uses SAP BusinessObjects Data Services to specify and load the relevant business data in defined periods of time from an ERP system into the SAP HANA database.
You can reuse the ERP application logic by reading extractors or utilizing SAP function modules. In addition, the ETL-based method offers options for the integration of third-party data providers.
Transaction Log-Based Data Replication Using Sybase Replication is based on capturing table changes from low-level database log files. This method is database-dependent.
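To make the trigger-based idea concrete, here is a minimal Python sketch (purely illustrative; real replication servers such as SLT work very differently internally): a "trigger" on the source table records each change in a logging table, and a replicator later applies the logged changes to the target.

```python
# Toy model of trigger-based change capture. All names and data are invented.

source = {}        # source table: key -> row
change_log = []    # "logging table" that the trigger fills
target = {}        # replicated table on the target side

def insert_with_trigger(key, row):
    """Insert into the source table; the 'trigger' captures the change."""
    source[key] = row
    change_log.append(("INSERT", key, row))  # change recorded, not the whole table

def replicate():
    """Apply all logged changes to the target, then clear the log."""
    for op, key, row in change_log:
        if op == "INSERT":
            target[key] = row
    change_log.clear()

insert_with_trigger(1, {"material": "M-01", "qty": 10})
insert_with_trigger(2, {"material": "M-02", "qty": 5})
replicate()
print(target)  # both rows have been replicated; the log is empty again
```

The key property this models is that only the *changes* travel to the target, not the full table, which is what makes trigger-based replication suitable for near-real-time scenarios.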
Q2. Which of the following statements are true?
a. There are four types of information views: attribute view, analytic view, hierarchy view and calculation view. All these views are non-materialized views.
b. An analytic view is used to model data that includes measures.
c. Calculated attributes are derived from one or more existing attributes or constants.
d. Calculation views can include measures and be used for multi-dimensional reporting, or can contain no measures and be used for list-type reporting
There are three types of information views: attribute view, analytic view, and calculation view.
All three types of information views are non-materialized views. This creates agility through the rapid deployment of changes.
An attribute view is used to model an entity based on the relationships between attribute data contained in multiple source tables.
For example, customer ID is the attribute data that describes measures (that is, who purchased a product). However, customer ID has much more depth to it when joined with other attribute data that further describes the customer (customer address, customer relationship, customer status, customer hierarchy, and so on).
You create an attribute view to locate the attribute data and to define the relationships between the various tables to model how customer attribute data, for example, will be used to address business needs.
An analytic view is used to model data that includes measures. For example, an operational data mart representing sales order history would include measures for quantity, price, and so on.
The data foundation of an analytic view can contain multiple tables. However, measures that are selected for inclusion in an analytic view must originate from only one of these tables (for business requirements that include measures sourced from multiple source tables, see calculation view).
Analytic views can be simply a combination of tables that contain both attribute data and measure data.
For example, a report requiring the following:
Customer_ID, Order_Number, Product_ID, Quantity_Ordered, Quantity_Shipped
Optionally, attribute views can also be included in the analytic view definition. In this way, additional depth of attribute data can be achieved. The analytic view inherits the definitions of any attribute views that are included in the definition.
Product_ID / Product_Name / Product_Hierarchy, Quantity_Ordered, Quantity_Shipped
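The join an analytic view performs can be sketched in a few lines of Python (the table and field names here are invented for illustration): a fact table supplies the measures, and attribute data is joined in to add descriptive depth to each row.

```python
# Illustrative sketch of an analytic-view style join. Data is invented.

orders = [  # data foundation: all measures come from this one table
    {"customer_id": "C1", "product_id": "P1", "qty_ordered": 10, "qty_shipped": 8},
    {"customer_id": "C2", "product_id": "P2", "qty_ordered": 5,  "qty_shipped": 5},
]

products = {  # attribute data, as an attribute view would provide it
    "P1": {"product_name": "Widget", "product_hierarchy": "HW"},
    "P2": {"product_name": "Gadget", "product_hierarchy": "SW"},
}

# Joining attribute data onto the measures enriches each report row.
report = [{**row, **products[row["product_id"]]} for row in orders]
print(report[0]["product_name"])  # Widget
```

Note that the measures (`qty_ordered`, `qty_shipped`) all come from the single `orders` table, matching the rule stated above; the attribute view only adds descriptive columns.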
A calculation view is used to define more advanced slices on the data in SAP HANA database. Calculation views can be simple and mirror the functionality found in both attribute views and analytic views.
However, they are typically used when the business use case requires advanced logic that is not covered in the previous types of information views.
For example, calculation views can have layers of calculation logic, can include measures sourced from multiple source tables, can include advanced SQL logic, and so on. The data foundation of the calculation view can include any combination of tables, column views, attribute views and analytic views. You can create joins, unions, projections, and aggregation levels on the sources.
Calculation views can include measures and be used for multi-dimensional reporting, or can contain no measures and be used for list-type reporting. Calculation views can be created either using a graphical editor or using a SQL editor. These various options provide maximum flexibility for the most complex and comprehensive business requirements.
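The kind of logic that pushes you towards a calculation view can be sketched as follows (a hedged Python illustration with invented data): a union of two sources followed by an aggregation, i.e. measures sourced from more than one table, which an analytic view alone cannot cover.

```python
# Sketch of calculation-view style logic: union two sources, then aggregate.
# Source names and figures are invented for illustration.

from collections import defaultdict

domestic_sales = [{"product": "P1", "amount": 100}, {"product": "P2", "amount": 50}]
export_sales   = [{"product": "P1", "amount": 70}]

totals = defaultdict(int)
for row in domestic_sales + export_sales:    # union of both sources
    totals[row["product"]] += row["amount"]  # aggregation on top of the union

print(dict(totals))  # {'P1': 170, 'P2': 50}
```

Here the `amount` measure comes from two different tables, so the union-plus-aggregation layer is exactly the "advanced logic" the text describes.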
Q3. HANA supports which of the following hierarchies?
More than one answer may be correct.
a. Time Based hierarchies
b. Level Hierarchies
c. Parent/Child Hierarchies
d. Matrix organization Hierarchies
Hierarchies are used to structure and define the relationship between attributes of attribute views and calculation views that are used for business analysis. Exposed models that consist of attributes in hierarchies simplify the generation of reports.
For example, consider the TIME attribute view with YEAR, QUARTER, and MONTH attributes. You can use these YEAR, QUARTER, and MONTH attributes to define a hierarchy for the TIME attribute view, for example YEAR > QUARTER > MONTH.
The following types of hierarchies are supported:
Level Hierarchies are hierarchies that are rigid in nature, where the root and the child nodes can be accessed only in the defined order. For example, organizational structures, and so on.
Parent/child hierarchies are hierarchies that are very similar to a BOM (parent and child) or an Employee Master (employee and manager). The hierarchy can be explored based on a selected parent, and there are cases where the child can itself be a parent. This hierarchy is derived from the parent/child values in the data.
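The difference between the two hierarchy types can be sketched in Python (example data invented): a level hierarchy is a fixed ordering of levels, while a parent/child hierarchy is walked by following each member's parent pointer.

```python
# Sketch of the two supported hierarchy types. All data is invented.

# Level hierarchy: rigid, the levels are traversed only in the defined order.
level_hierarchy = ["YEAR", "QUARTER", "MONTH"]

# Parent/child hierarchy: each member points to its parent (employee -> manager);
# a child (Bob) can itself be a parent (of Carol).
parent_of = {"Alice": None, "Bob": "Alice", "Carol": "Bob"}

def path_to_root(member):
    """Walk up the parent/child hierarchy from a member to the root."""
    path = [member]
    while parent_of[member] is not None:
        member = parent_of[member]
        path.append(member)
    return path

print(path_to_root("Carol"))  # ['Carol', 'Bob', 'Alice']
```

The level hierarchy needs no traversal logic at all, which is why it suits rigid structures like time; the parent/child form handles structures of arbitrary depth.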
Q4. The SAP HANA modeler is a graphical data modeling tool which allows you to design analytic models and, later, analytic privileges that govern access to those models.
Which of the following represents the logical flow of activities?
a. Import source system metadata -> Create Information Models -> Provision Data -> Deploy -> Consume
b. Import source system metadata -> Provision Data ->Create Information Models -> Deploy -> Consume
c. Import source system metadata -> Provision Data -> Deploy -> Create Information Models -> Consume
d. Import source system metadata -> Deploy ->Create Information Models -> Provision Data -> Consume
The figure below shows the series of activities in a logical flow.
Q5. Of the two queries below, which is more efficient?
a. matmoves = SELECT * FROM MSEG
   FOR EACH matmove IN matmoves
     IF matmove.whichPlant = 'I' THEN
       plant_text = SELECT plant_text FROM WERKS WHERE id = matmove.plant
     ELSE
       plant_text = SELECT plant_text FROM WERKS_EXT WHERE id = matmove.plant
b. IF which_plant = 'I' THEN
     plant_text = plant_int_text
   ELSE
     plant_text = plant_ext_text
Query (a) above is an example of a poor query.
Query (b) is an example of a more efficient way of getting the same data. It is more efficient because it queries against a set of data, rather than looping over all of the data row by row.
It is very important to avoid loops when modeling, as this will cause very poor performance. This is especially true when the amount of data in the table that will be looped over is very high (hundreds of millions of entries).
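The performance difference can be sketched in Python (data and names are invented, mirroring the pseudocode above): the loop style issues one database request per row, while the set style resolves everything with a single set-oriented operation.

```python
# Sketch: per-row lookups vs one set-oriented join. Data is invented.

mseg = [{"plant": "0001", "whichPlant": "I"},
        {"plant": "0002", "whichPlant": "E"},
        {"plant": "0001", "whichPlant": "I"}]
werks     = {"0001": "Internal plant 1"}  # internal plant texts
werks_ext = {"0002": "External plant 2"}  # external plant texts

queries = 0
def select_plant_text(table, plant_id):
    """Stands in for one SELECT round trip to the database."""
    global queries
    queries += 1
    return table[plant_id]

# (a) loop style: one SELECT per material movement -> N round trips.
loop_result = [select_plant_text(werks if r["whichPlant"] == "I" else werks_ext,
                                 r["plant"]) for r in mseg]
loop_queries = queries

# (b) set style: build the joined lookup once, then read from it.
queries = 0
joined = {**werks, **werks_ext}  # a single set-oriented "join"
queries += 1
set_result = [joined[r["plant"]] for r in mseg]

print(loop_result == set_result, loop_queries, queries)  # True 3 1
```

Same result either way, but the loop's query count grows with the table size, which is exactly why looping over hundreds of millions of rows is ruinous.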
Q6 Which of the following statements are true?
a. One table may appear in exactly one schema
b. It is possible to grant authorizations on a table level.
c. Authorizations granted on a schema are automatically propagated to all objects within the schema
Many other database systems distinguish among databases, catalogs, and schemas, with a database forming a physical storage (typically in files), and catalogs and schemas being merely namespaces (catalogs contain schemas).
Often, a private schema is created with every user account.
The SAP HANA database does not store schemas in separate databases. This is not an issue, as there is no need to distinguish between different physical storage locations: all the information in the system is kept in memory.
An SAP HANA appliance consists of exactly one database which in turn has exactly one catalog.
Schemas are separate namespaces; i.e. the same table or view name may appear in multiple schemas. It is possible to grant authorizations on schema, table or view level.
Authorizations granted for a schema are not automatically propagated to all existing objects in the schema. However, if a user creates a new object in a schema, it inherits the authorizations the respective user has for the schema.
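That rule can be illustrated with a toy Python model (this is not HANA's actual authorization engine, just a sketch of the stated behavior): granting on a schema does not touch existing objects, but a newly created object inherits the creator's schema authorizations.

```python
# Toy model of the schema-grant rule above. Users, schemas, and privileges
# are invented; real HANA authorization works through its own catalog.

schema_grants = {}  # (user, schema) -> set of privileges
object_grants = {}  # (user, object) -> set of privileges

def grant_on_schema(user, schema, priv):
    schema_grants.setdefault((user, schema), set()).add(priv)
    # note: nothing is pushed to objects that already exist in the schema

def create_object(user, schema, obj):
    # a newly created object inherits the creator's schema authorizations
    object_grants[(user, obj)] = set(schema_grants.get((user, schema), set()))

create_object("alice", "SALES", "T_OLD")    # created before the grant
grant_on_schema("alice", "SALES", "SELECT")
create_object("alice", "SALES", "T_NEW")    # created after the grant

print(object_grants[("alice", "T_OLD")])  # set() -- no retroactive propagation
print(object_grants[("alice", "T_NEW")])  # {'SELECT'} -- inherited at creation
```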
Q7. SQLScript can exploit the specific capabilities of the built-in functions in HANA. With reference to the above, which of the following statements are true?
a. If our data model is a star scheme, it makes sense to model the data as an Attribute view.
b. If our application involves complex joins, it may be appropriate to model the data as an Attribute view.
c. If aggregation is required, it may be appropriate to model the data as an Analytic view.
SQLScript can exploit the specific capabilities of the built-in functions or SQL statements.
For instance, if our data model is a star schema, it makes sense to model the data as an Analytic view.
This allows the HDB to exploit the star schema when computing joins, producing much better performance.
Similarly, if the application involves complex joins, it might make sense to model the data as an Attribute view. Again, this conveys additional information on the structure of the data which is exploited by the HDB for computing joins.
Below you can find a proposal for how to select among the various ways to implement logic.
More Questions? Have a look at: