This tutorial will teach you the basics of SAP HANA. It is divided into sections covering SAP HANA basics, SAP HANA modeling, reporting, and administration, starting from an overview of the in-memory computing engine, the HANA Studio, the Studio administration views, and the system monitor.
SAP HANA is an in-memory computing platform that allows real-time data analysis. This tutorial walks through SAP HANA step by step, from the basics through modeling, reporting, security, and administration.
Defining Measures: Once you activate the view and click on Data Preview, all attributes and measures are added to the list of Available Objects.
There is an option to choose different types of charts and graphs. Calculation views are used to perform complex calculations that are not possible with other types of views.
How to create a Calculation View? Choose the package under which you want to create the calculation view. When you click on Calculation View, a new window opens. You can create two types of calculation views: Graphical and SQL Script. A calculation view can consume attribute views, analytic views, and other calculation views. With data category Cube, the default node is Aggregation.
You can choose Star Join with the Cube data category. With data category Dimension, the default node is Projection. Fact tables can be added directly and used with the default nodes in a calculation view. Example: The following example shows how to use a calculation view with a star join. Copy and paste the below script in the SQL editor and execute it.
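The original script is not reproduced in this text, so the following is a minimal sketch of the kind of tables such an example needs: two dimension tables and two fact tables with sample rows. All schema, table, and column names (DEMO, DIM_PRODUCT, DIM_REGION, FACT_SALES, FACT_TARGETS) are illustrative assumptions, not the tutorial's original names.

```sql
-- Illustrative only: assumed schema and table names.
CREATE SCHEMA DEMO;

-- Two dimension tables (to be modeled as Dimension calculation views)
CREATE COLUMN TABLE DEMO.DIM_PRODUCT (
    PRODUCT_ID   INTEGER PRIMARY KEY,
    PRODUCT_NAME VARCHAR(40)
);
CREATE COLUMN TABLE DEMO.DIM_REGION (
    REGION_ID   INTEGER PRIMARY KEY,
    REGION_NAME VARCHAR(40)
);

-- Two fact tables (to be added as projections in the star join)
CREATE COLUMN TABLE DEMO.FACT_SALES (
    PRODUCT_ID INTEGER,
    REGION_ID  INTEGER,
    AMOUNT     DECIMAL(15,2)
);
CREATE COLUMN TABLE DEMO.FACT_TARGETS (
    PRODUCT_ID    INTEGER,
    REGION_ID     INTEGER,
    TARGET_AMOUNT DECIMAL(15,2)
);

-- A few sample rows so Data Preview has something to show
INSERT INTO DEMO.DIM_PRODUCT VALUES (1, 'Laptop');
INSERT INTO DEMO.DIM_REGION  VALUES (10, 'EMEA');
INSERT INTO DEMO.FACT_SALES  VALUES (1, 10, 2500.00);
INSERT INTO DEMO.FACT_TARGETS VALUES (1, 10, 3000.00);
```

After executing a script of this kind, the dimension tables can be changed to Dimension calculation views and the fact tables added as projections, as described next.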
Dim tables: First, change both dim tables to calculation views of type Dimension. Create a calculation view with Star Join. In the graphical pane, add two projections for the two fact tables. Add both fact tables to the projections and add the attributes of these projections to the output pane. Add the parameters of the fact join to the output pane, choose the parameters in the output pane, and activate the view. Star Join: Once the view is activated successfully, right-click the view name and click Data Preview. Add attributes and measures to the values and labels axes and do the analysis.
Benefits of using Star Join: It simplifies the design process. You need not create analytic views and attribute views; fact tables can be used directly as projections. Alternatively, create projections of both analytic views and join them, add the attributes of this join to the output pane, then join to a projection and add the output again. Activate the view successfully and go to Data Preview for analysis. In Analytic Privileges, you can assign different types of rights to different users on different components of a view.
Sometimes, it is required that data in the same view should not be accessible to other users who have no relevant requirement for that data. If you do not want your report developer to see salary details or employee logon details of all employees, you can hide them by using the Analytic Privileges option. Note that measures cannot be used to restrict access in analytic privileges; restrictions are defined on attributes.
A new window will open. There is also an option to copy an existing analytic privilege package. Once you click the Add button, it shows all the views under the Content tab. The selected view is added under the reference models.
To add attributes from the selected view to the analytic privilege, click the Add button in the Associated Attributes Restrictions window. Add the objects you want from the Select Object option and click OK. The Assign Restriction option allows you to specify the values you want to hide in the modeling view from a specific user. The status message "completed successfully" in the job log confirms activation, and the view can now be used by adding it to a role.
That view will be added to the user role under Analytic Privileges. To delete an analytic privilege from a specific user, select the view under the tab and use the red Delete option. Use the Deploy arrow at the top or F8 to apply the change to the user profile.
Information Composer allows you to import data in workbook format. It is used by business users who do not have any technical knowledge.
It provides simple functionality with an easy-to-use interface. Information Composer helps to extract, clean, and preview data, and automates the creation of the physical table in the HANA database. How to upload data using Information Composer? It allows us to upload a large amount of data, up to 5 million cells. Link to access Information Composer- http: You can perform data loading or manipulation using this tool.
Details of tables created using IC can be found under these tables. Using the Clipboard: Another way to upload data in IC is via the clipboard. Copy the data to the clipboard and upload it with the help of Information Composer. Information Composer also lets you preview the data and provides a summary of the data in temporary storage. It has an inbuilt data-cleansing capability that removes inconsistencies in the data.
Once the data is cleansed, you need to classify each field as an attribute or a measure; IC has an inbuilt feature to check the data type of the uploaded data. The final step is to publish the data to physical tables in the HANA database. User Roles for using data published with Information Composer: Two sets of users can be defined to use data published from IC; one role does not allow the user to upload or create any information views using IC. Export and Import: You do not need to recreate all tables and information models, as you can simply export them to a new system or import them into an existing target system to reduce the effort.
This option can be accessed from the File menu at the top or by right-clicking any table or information model in HANA Studio. Users can use this option to export all the packages that make up a delivery unit, and the relevant objects contained in them, to a HANA server or a local client location. The user should create the delivery unit before using it. You can see the list of all packages assigned to a delivery unit.
This will export the selected delivery unit to the specified location. Developer Mode: This option can be used to export individual objects to a location on the local system. The user can select a single information view or a group of views and packages, select the local client location for the export, and click Finish. This can be used on request.
Suppose a user creates an information view that throws an error he is not able to resolve. In that case, he can use this option to export the view along with its data and share it with SAP for debugging purposes. It can also be used to export the landscape from one system to another.
This option can be used to export tables along with their content. Data from Local File: This is used to import data from a flat file such as .csv or .xls. It gives an option to keep the header row, and to create a new table under an existing schema or import the data from the file into an existing table. You can preview the data and check the data definition of the table, which will be the same as that of the file. You can choose from a server or a local client. The user need not trigger the activation manually for the imported views.
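As an alternative to the import wizard, a flat file that resides on the HANA server can also be loaded with the IMPORT FROM CSV FILE statement. The file path, schema, and table name below are assumptions for illustration only.

```sql
-- Load a server-side CSV file into an existing column table.
-- Path, schema, and table names are illustrative.
IMPORT FROM CSV FILE '/tmp/sales.csv'
INTO DEMO.SALES
WITH RECORD DELIMITED BY '\n'
     FIELD DELIMITED BY ','
     SKIP FIRST 1 ROW;   -- skip the header row; omit this clause to load it
```

The target table must already exist with column types matching the file, just as with the wizard-based import.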
Click Finish, and once the import completes successfully, the content is populated to the target system. Developer Mode: Browse to the local client location where the views were exported and select the views to be imported; the user can select individual views or groups of views and packages, then click Finish. For mass import, configure the system for mass import and click Finish.
Click Finish after that. Reporting tools enable business managers, analysts, sales managers, and senior management to analyze historic information, create business scenarios, and decide the business strategy of the company. This generates the need to consume HANA modeling views in different reporting tools and to generate reports and dashboards that are easy for end users to understand.
WebI uses a semantic layer called a Universe to connect to the data source, and these Universes are used for reporting in the tool. IDT supports multi-source data sources, whereas UDT supports only a single source. The main tools used for designing interactive dashboards are Design Studio and Dashboard Designer.
HANA views can be consumed directly in Lumira for visualization and creating stories. Log in to the CMC with your user name and password. It also shows the connections already created in the CMC. To create a new connection, click the green icon. Enter the name and description of the OLAP connection. Click Connect and choose the modeling view by entering your user name and password.
Authentication Types: depending on the chosen type, the connection will not ask for the user name and password again while it is used. It shows all measures and dimensions.
There are four tabs inside SAP Lumira. You can see the data, perform custom calculations, and add graphs and charts.
Drag the first visualization, then add a page, then add the second visualization. You can also test the connection by clicking the Test Connection option. The next step is to publish the connection to the repository to make it available for use. It will create a new relational connection. If you use this connection while creating and publishing a Universe, it will not allow you to publish it to the repository. Join the dim and fact tables on the primary keys of the dim tables to create a schema. Now create a new business layer on the data foundation; this layer will be consumed by the BI application tools.
Right-click on it. All objects are added to the Query Panel. You can choose attributes and measures from the left pane and add them to Result Objects. It gives you a drop-down list of all packages available in the HANA system.
You can choose different attributes and measures for the report, and different chart types such as pie charts and bar charts from the design option at the top. SAP HANA enables customers to implement security policies and procedures and to meet the compliance requirements of the company. A HANA system can also contain multiple multitenant database containers. A multiple-container system always has exactly one system database and any number of multitenant database containers.
SAP HANA provides all security-related features such as authentication, authorization, encryption, and auditing, plus some add-on features that are not supported in other multitenant databases.
Every user who wants to work with the HANA database must have a database user with the necessary privileges. A user accessing the HANA system can be either a technical user or an end user, depending on the access requirement. Whether an operation can be executed depends on the privileges the user has been granted. User Types: user types vary according to security policies and the privileges assigned to the user profile. A user can be a technical database user or an end user who needs access to the HANA system for reporting or data manipulation.
Standard Users: Standard users are users who can create objects in their own schemas and have read access to system information models.
When these users are created, they do not have any access initially.
If we compare restricted users with standard users, the differences show up in the most common activities. You will see the Security tab in the Systems view; expanding it gives the options Users and Roles. To create a new user, right-click Users and go to New User. A new window opens where you define the user and user parameters. Enter the user name (mandatory) and, in the Authentication field, enter the password.
The password policy is applied when saving the password for a new user. You can also choose to create a restricted user. The specified name must not be identical to the name of an existing user or role. The password rules include a minimal password length and a definition of which character types (lower case, upper case, digits, special characters) must be part of the password. Users in the database can be authenticated by various mechanisms, including internal authentication using a password.
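The same user creation can be done in the SQL console instead of the Studio dialog; the user names and password below are illustrative only, and the password must satisfy the configured password policy.

```sql
-- Create a standard user (names and password are illustrative)
CREATE USER REPORT_USER1 PASSWORD "Abcd1234";

-- Create a restricted user, who initially has no access
CREATE RESTRICTED USER REPORT_USER2 PASSWORD "Abcd1234";
```

On first logon with an internally authenticated user, HANA normally forces a password change unless the password policy is configured otherwise.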
A user can be authenticated by more than one mechanism at a time. However, only one password and one principal name for Kerberos can be valid at any one time. One authentication mechanism has to be specified to allow the user to connect to and work with the database instance. You can also define the validity of the user by selecting a validity interval of dates.
Validity specification is an optional user parameter. Once this is done, the next step is to define privileges for user profile. There are different types of privileges that can be added to a user profile.
You can assign standard HANA roles to a user profile or add custom roles created under the Roles tab. Custom roles let you define roles per access requirement and add them directly to the user profile; this removes the need to remember and add individual objects to a user profile for each access type. The PUBLIC role is a generic role assigned to all database users by default. It contains read-only access to system views and execute privileges for some procedures. This role cannot be revoked.
System Privileges: There are different types of system privileges that can be added to a user profile, including the repository privileges to work with imported objects. Common supported system privileges are given below. Attach Debugger: authorizes the debugging of a procedure call made by a different user. Audit Admin: controls the execution of auditing-related commands. Catalog Read: authorizes unfiltered read-only access to all system views.
Normally, the content of these views is filtered based on the privileges of the accessing user. Create Schema: by default, each user owns one schema; with this privilege, the user is allowed to create additional schemas. Only the owner of an analytic privilege can further grant or revoke that privilege to other users or roles. Credential Admin: authorizes the credential commands. Data Admin: authorizes reading all data in the system views and enables execution of any Data Definition Language (DDL) commands in the SAP HANA database. A user with this privilege cannot select or change data stored in tables for which they have no access privileges, but they can drop tables or modify table definitions.
Inifile Admin: authorizes changing system settings. Resource Admin: authorizes commands concerning system resources, as well as many of the commands available in the Management Console. For repository privileges concerning activated objects, please check the documentation. Component-specific privileges use the component name as the first identifier of the system privilege and the component privilege name as the second identifier.
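System privileges are granted with plain GRANT statements; the user names below are illustrative assumptions.

```sql
-- Grant selected system privileges to illustrative users
GRANT CATALOG READ TO REPORT_USER1;    -- unfiltered read access to system views
GRANT AUDIT ADMIN   TO SEC_ADMIN1;     -- run auditing-related commands
GRANT INIFILE ADMIN TO BASIS_ADMIN1;   -- change system settings
```

The grantor needs the privilege with ADMIN OPTION (or an appropriate admin role) for the grant to succeed.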
Object privileges are used to allow operations like Select, Insert, Update, and Delete on tables, views, or schemas. Common supported object privileges are given below. There are multiple kinds of database objects in the HANA database, so not all privileges are applicable to all kinds of objects.
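Object privileges can be granted at schema level or per object; all schema, table, and user names here are illustrative.

```sql
-- Schema-wide read access, and full DML on a single table (names illustrative)
GRANT SELECT ON SCHEMA DEMO TO REPORT_USER1;
GRANT SELECT, INSERT, UPDATE, DELETE ON DEMO.SALES TO ETL_USER1;

-- Privileges can be withdrawn again with REVOKE
REVOKE DELETE ON DEMO.SALES FROM ETL_USER1;
```

A grant on a schema covers all objects in that schema, which is usually preferable to maintaining long per-table grant lists.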
Analytic Privileges: Sometimes it is required that data in the same view should not be accessible to other users who have no relevant requirement for that data. Analytic privileges apply row- and column-level security. Package privileges are used to allow access to data models (analytic or calculation views) or to repository objects.
All privileges assigned to a repository package are also assigned to all its sub-packages. You can also specify whether the assigned authorizations can be passed on to other users.
Steps to add package privileges to a user profile: use the Ctrl key to select multiple packages, then grant authorization to modify objects in those packages; this can be assigned to an individual user or to a group of users. Application Privileges for Users and User Roles: to define application-specific privileges in a user profile or for a group of users, the application privileges below should be used. The SAP HANA system supports various types of authentication methods, and all these login methods are configured at the time of profile creation.
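Package privileges can also be granted in SQL using the REPO.* privileges; the package and user names below are illustrative.

```sql
-- Repository package privileges (package and user names illustrative)
GRANT REPO.READ ON "demo.models" TO REPORT_USER1;
GRANT REPO.EDIT_NATIVE_OBJECTS,
      REPO.ACTIVATE_NATIVE_OBJECTS ON "demo.models" TO MODELER1;
```

REPO.READ allows viewing the package content, while the edit and activate privileges allow changing and activating native objects in the package and, per the inheritance rule above, in its sub-packages.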
The password should comply with the password policy, i.e., password length, complexity, lower and upper case letters, etc. Note that the password policy cannot be deactivated. External logins must be mapped to an internal database user. SAML is used only for authentication, not for authorization. The user in the trusted certificate must exist in the HANA system, as there is no support for user mapping.
SSO can be configured using the methods below. The privileges granted to a user are determined by the object privileges assigned to the user profile and by the roles granted to the user; authorization is the combination of both. When a user tries to perform an operation on the HANA database, the system performs an authorization check. As soon as all required privileges are found, the system stops the check and grants the requested access.
System Privileges: applicable to system and database authorization for users, controlling system activities. They are used for administrative tasks such as creating schemas, taking data backups, creating users and roles, and so on. System privileges are also used to perform repository operations.
Object Privileges: applicable to database operations on database objects like tables and schemas. Different actions like Select, Execute, Alter, Drop, and Delete can be granted per object. Analytic privileges, in contrast, are used to control access to modeling views created inside packages, such as attribute views, analytic views, and calculation views.
They apply row- and column-level security to attributes defined in modeling views in HANA packages. Package Privileges: applicable to access and use of packages created in the repository of the HANA database. The user should be authorized externally for the objects on which the repository objects are modeled in the HANA system.
Temporary license keys are valid only for 90 days, and you should request permanent license keys from SAP Service Marketplace before this 90-day period after installation expires. License keys specify the amount of memory licensed for the target HANA installation. When a permanent license key expires, a temporary license key valid for only 28 days is issued; during this period, you have to install a permanent license key again.
If this situation occurs, the HANA system has to be restarted, or a new license key should be requested and installed. The All Licenses tab under License shows the product name, description, hardware key, first installation time, etc. An audit policy defines what activities are audited in the HANA system: who performed which activities at what time. When an audited action is performed, the policy triggers an audit event that is written to the audit trail. You can also delete audit entries in the audit trail.
In a distributed environment with multiple databases, the audit policy can be enabled on each individual system. For the system database, the audit policy is defined in nameserver.ini. Activating an audit policy requires the Audit Admin privilege. You can also choose the audit trail target. The following audit trail targets are possible: the logging system of the Linux operating system (syslog), or an internal database table, on which only a user with the Audit Admin or Audit Operator system privilege can run select operations.
A CSV text file as the audit trail target is only used for test purposes in a non-production environment. Enter the policy name and the actions to be audited, then save the new policy using the Deploy button.
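The same steps can be done in SQL: enable auditing globally, then create and enable a policy. The policy name and audited actions below are illustrative choices.

```sql
-- Enable auditing globally (system-level configuration change)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('auditing configuration', 'global_auditing_state') = 'true'
    WITH RECONFIGURE;

-- Create a policy that audits successful user administration actions
CREATE AUDIT POLICY user_admin_audit
    AUDITING SUCCESSFUL CREATE USER, DROP USER
    LEVEL INFO;

-- Policies are created disabled and must be enabled explicitly
ALTER AUDIT POLICY user_admin_audit ENABLE;
```

Both statements require the Audit Admin system privilege described earlier.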
Once enabled, a policy is active automatically: when an action condition is met, an audit entry is created in the audit trail. You can disable a policy by changing its status to disabled, or delete it. System replication can be set up on the console via the command line or by using HANA Studio; the primary ECC or transaction systems can stay online during this process. There are three data replication methods in the HANA system: trigger-based replication using SAP LT (SLT), ETL-based replication using SAP Data Services, and log-based replication using the Sybase Replication Server. Trigger-based replication has no measurable performance impact on the source system.
When this is done, it means that when you are logged on to AA1 and your user has sufficient authorization in BB1, you can use the RFC connection to log on to BB1 without re-entering user and password. Enter the target host and click the Save option at the top. Click Test Connection to verify that the connection works.
ETL-based replication reads the business data at the application layer. You define data flows in Data Services, schedule a replication job, and define the source and target systems in data stores in the Data Services Designer. If data is displayed, the data store connection is fine. To use the HANA database as the target system, create a new data store. You can add a table if you want to move data from a source table to a specific table in the HANA database.
Note that the target table should have data types similar to the source table. A batch job can also be executed manually: log in to the Data Services Management Console. In trigger-based replication, the R3 load on the source system exports data for the selected tables and transfers it to the R3 load components on the HANA system. The SAP host agent, which is part of the source system, manages the authentication between the source and target systems.
The Sybase Replication Agent detects any data changes during the initial load and ensures every single change is completed. When an entry in a table in the source system is inserted, updated, or deleted, a table log is created, and this log moves the data from the source system to the HANA database. Delta Replication after Initial Load: delta replication captures the data changes in the source system in real time once the initial load and replication are completed.
All further changes in the source system are captured and replicated to the HANA database using the above-mentioned method. DXC is a batch-driven data replication technique, considered a method for extraction, transformation, and load with limited capabilities for data extraction. Data extraction using DXC at certain intervals is sufficient in many cases.
You can set the interval at which the batch job executes. Input parameters: replicate the metadata using the specified application component and data source version. A system table stores information about all data sources related to DXC.
Both SQL and MDX can be used: MDX supports the multidimensional data model and reporting and analysis requirements. Existing physical tables and schemas present the data foundation for information models. When MDX statements are executed, they are parsed by the MDX interface and a calculation model is generated for each statement. This calculation model creates an execution plan that generates standard results for MDX.
These results are directly consumed by OLAP clients. You can download the client tool from SAP Service Marketplace. Alert monitoring is used to handle critical alerts like CPU usage, a full disk, a file system reaching its threshold, etc.
An alert is raised when any component breaches its set threshold value. The priority of an alert in the HANA system indicates the criticality of the problem and depends on the check performed on the component. The System Monitor is used to check all key components and services of a HANA system. You can also drill down into the details of an individual system in the Administration editor.
It also shows when an alert was raised, the description of the alert, its priority, etc. Persistence ensures that the database can be restored to the most recent committed state after a restart or a system crash, and that transactions are either executed completely or completely undone.
Services in the HANA system have their own persistence. It provides savepoints and logs for all database transactions since the last savepoint. Data and Transaction Log Volumes: so that the database can always be restored to its most recent state, changes to data in the database are regularly copied to disk. Log files containing data changes and certain transaction events are also saved regularly to disk.
Data and logs of a system are stored in data and log volumes respectively. Data is stored in data pages, called blocks. These blocks are written to the data volumes at regular intervals, known as savepoints.
Log volumes store information about data changes. Changes made between two log points are written to the log volumes as log entries; they are saved to the log buffer when a transaction is committed. The regular intervals at which modified data is written to disk are called savepoints, and by default they occur every five minutes.
During this operation, changed data is written to disk, and redo logs are saved to disk as well. The data belonging to a savepoint represents a consistent state of the data on disk and remains there until the next savepoint operation has completed. Redo log entries are written to the log volumes for all changes to persistent data.
In the event of a database restart, data from the last completed savepoint is read from the data volumes, and the redo log entries written to the log volumes since then are replayed. Savepoints are also initiated by other operations like database shutdown or system restart.
During the HANA system installation, default directories are created as the storage locations for the data and log volumes. During a savepoint operation, transactions continue to run as normal; with the HANA system running on proper hardware, the performance impact of savepoints is negligible. A savepoint can also be triggered manually with an SQL command.
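The manual savepoint trigger is a single SQL statement:

```sql
-- Force an immediate savepoint (requires the appropriate system privilege)
ALTER SYSTEM SAVEPOINT;
```

This is occasionally useful before planned maintenance, so that a subsequent restart starts from a very recent savepoint.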
Overview Tab: shows the status of the currently running data backup and the last successful data backup. The Backup Now option can be used to run the data backup wizard. Configuration Tab: shows the backup interval settings, file-based data backup settings, and log-based data backup settings. Backint Settings: give the option to use a third-party tool for data and log backup, with configuration of the backup agent.
Configure the connection to a third-party backup tool by specifying a parameter file for the Backint agent. You can change the backup folder and also limit the size of data backup files; if a data backup exceeds the set file size, it is split across multiple files. Log backup settings specify the destination folder where log backups are saved on the external server. You can choose a destination type for log backups (File) and choose the backup interval from the drop-down.
The backup interval is the longest amount of time that can pass before a new log backup is written; it can be in seconds, minutes, or hours. The Enable Automatic Log Backup option helps you keep the log area vacant. If you disable it, the log area will continue to fill, which can cause the database to hang.
Open Backup Wizard: the backup wizard is used to specify backup settings: the backup type, destination type, backup destination folder, backup prefix, size of backup, etc. During recovery, end users and SAP applications cannot access the database.
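A file-based data backup can also be started from the SQL console instead of the wizard; the destination path prefix below is an illustrative assumption.

```sql
-- Complete file-based data backup; the path prefix is illustrative
BACKUP DATA USING FILE ('/usr/sap/HDB/backup/data/FULL_BACKUP');
```

The path acts as a prefix: HANA writes one backup file per service under that name, and the executing user needs the backup-related system privilege.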
How to recover a HANA system? The following recovery types are available:
Most Recent State: used for recovering the database to a time as close as possible to the current time. The data backup, the log backups made since that data backup, and the log area are required.
Point in Time: used for recovering the database to a specific point in time. The same data backup, log backups, and log area are required.
Specific Data Backup: used for recovering the database to a specified data backup; only that data backup is required.
Specific Log Position: an advanced option that can be used in exceptional cases where a previous recovery failed.
To run the recovery wizard, you need administrator privileges on the HANA system. High availability in a HANA system defines a set of practices that help achieve business continuity in case of disasters like power failures in data centers, or natural disasters like fire and flood.
Currently, HANA does not easily support non-SAP analytic or transactional applications without significant application re-architecting. What does HANA cost and how large can it scale? HANA is not capable of storing petabyte-levels of data.
However, due to its advanced compression capabilities, HANA deployments can store tens of terabytes of data or more, which is considered large data volumes in most current SAP customer environments. What is the HANA value proposition to customers? Most struggle to make use of the data while spending large sums to store and protect it. One option to make use of this data is to extract, transform, and load subsets into a traditional enterprise data warehouse for analysis.
This process is time-consuming and requires significant investment in related proprietary hardware. The result is often an expensive, bloated EDW that provides little more than backward-looking views of company data. Its data replication and integration capabilities vastly speed up the process of loading data into the database.
And because it uses in-memory storage, applications on top of HANA can access data in near-real time, meaning end-users can gain meaningful insight while there is still time to take meaningful action.
HANA can also perform predictive analytics to help organizations plan for future market developments. How is it different from competing offerings from Oracle? Oracle unveiled an in-memory analytic appliance of its own, called Exalytics, at Oracle OpenWorld in October. Among the important differences compared to SAP HANA: Exalytics is designed to run on Sun-only hardware, it is a mash-up of various existing Oracle technologies, and there are few, if any, systems in production.
As with all Oracle technologies, the risk of vendor lock-in is high, and the cost is significantly higher than comparable HANA deployments.