Sunday, 24 December 2017

Checking recoverability with hdbbackupdiag --check in SAP HANA

Symptom
                To ensure recoverability of the SAP HANA database, you want to check if the required data and log backups are present and undamaged.

Reason and Prerequisites

  • You use an SAP HANA database Revision 64 or higher.
  • To protect business operations after system failures, database backups are created manually or automatically at regular intervals. However, recoverability of the database can only be ensured if all necessary backups are present and undamaged at the time of the recovery. Before starting a recovery, you can therefore check whether all backups are available and undamaged.

Solution
By calling hdbbackupdiag with the --check option, you can verify if the available backups are suitable for recovering the HANA database at a particular time. The backups used may be present in any directory in the file system or in an external backup tool.

The hdbbackupdiag tool determines which backups are required to reach the requested point in time and checks whether these backups are available and accessible. It then executes the following checks:

For backups that were written to the file system:

  • the file is contained in the file system, either at the location to which it was written or at a location specified by a search path
  • the current operating system user has read authorization for the file
  • the file is large enough to accommodate the usable payload stored at the start of the file
  • the file's backup ID corresponds to the backup ID specified in the backup catalog
For backups written to an external backup tool:

  • the backup is available in the external backup tool


Note: The hdbbackupdiag tool does not check the content of the backups for consistency. To check individual backups for consistency, use the hdbbackupcheck program instead.


Call hdbbackupdiag --check

hdbbackupdiag --check [options]

Options:

-d : Specify the directory in which to search for the backup catalog. If this option is not specified, a search is carried out for the latest version of the backup catalog in the current directory and, if necessary, in the directories specified with --logDirs and in the external backup tool.

-c : Specify a file name for the backup catalog.

-i : Specify a backup ID for a data backup; the data backup with this ID is used as the basis for the recovery. If this option is not specified, the most recent suitable data backup is used.

-u "YYYY-MM-DD HH:MM:SS": Specify a time in UTC as the recovery target. If this option is not available, the most recent possible point in time is used.

--dataDir : Specify a directory in which a search is carried out for data backup files. If this option is not specified, a search is only carried out for the data backup files in the paths noted in the backup catalog.

--logDirs : Specify a comma-separated list of directories in which a search is carried out for log backup files. If this option is not specified, a search is only carried out for the log backup files in the paths noted in the backup catalog.

--useBackintForCatalog: If this parameter is specified, a search is also carried out in the third-party backup tool for the most recent version of the backup catalog.

--backintDataParamFile : Specify a parameter file for accessing data backups via an external backup tool.

--backintLogParamFile : Specify a parameter file for accessing log backups via an external backup tool. If this parameter is not specified, the parameter file used for accessing the data backups is also used here.

Bear in mind that all directories must be specified as absolute paths.
This restriction does not apply to file names, such as the parameter file passed with the --backintDataParamFile option.
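For illustration, several of the options above can be combined in one call. The following sketch is hypothetical; the paths, SID and timestamp are placeholders and must be adapted to your own system:

```shell
# Hypothetical example: check recoverability to a specific UTC point in time,
# searching an extra log directory and the backup tool for the catalog.
# All paths and the timestamp are placeholders, not from the original post.
hdbbackupdiag --check \
  -u "2017-12-20 04:00:00" \
  --dataDir /usr/sap/IW2/HDB02/backup/data \
  --logDirs /usr/sap/IW2/HDB02/backup/log,/backup/archive/log \
  --useBackintForCatalog \
  --backintDataParamFile /usr/sap/IW2/SYS/global/hdb/opt/hdbconfig/param
```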


hdbbackupdiag --check: mode of operation

The program searches for the most recent version of the backup catalog in the directories specified on the command line and also, if necessary, in the external backup tool. From the backup catalog found, the system determines which data and log backups are required for a recovery, taking into account the specified options. The above-mentioned checks are executed for these backups.

The program indicates which backup catalog is used, which backups are required for the requested recovery, and for which backups the checks succeeded or failed.
If the checks for all backups required for the recovery complete successfully, the return value of the program is 0; otherwise, 1 is returned.


For example:

All backups are written to the default paths in the file system and are still available there:

> hdbbackupdiag --check -d $DIR_INSTANCE/backup/log
using newest backup catalog /usr/sap/IW2/HDB02/backup/log/log_backup_0_0_0_0.1371126898275
using backup catalog 1371126898275 from file /usr/sap/IW2/HDB02/backup/log/log_backup_0_0_0_0.1371126898275
Backup '/usr/sap/IW2/HDB02/backup/data/COMPLETE_DATA_BACKUP_databackup_0_1' successfully checked.
Backup '/usr/sap/IW2/HDB02/backup/data/COMPLETE_DATA_BACKUP_databackup_1_1' successfully checked.
Backup '/usr/sap/IW2/HDB02/backup/data/COMPLETE_DATA_BACKUP_databackup_2_1' successfully checked.
Backup '/usr/sap/IW2/HDB02/backup/data/COMPLETE_DATA_BACKUP_databackup_3_1' successfully checked.
Backup '/usr/sap/IW2/HDB02/backup/data/COMPLETE_DATA_BACKUP_databackup_4_1' successfully checked.
Backup '/usr/sap/IW2/HDB02/backup/log/log_backup_1_0_481920_492928' successfully checked.
Backup '/usr/sap/IW2/HDB02/backup/log/log_backup_2_0_524224_641600' successfully checked.
Backup '/usr/sap/IW2/HDB02/backup/log/log_backup_3_0_709376_710272' successfully checked.
Backup '/usr/sap/IW2/HDB02/backup/log/log_backup_4_0_1235008_1369728' successfully checked.
> echo $?
0
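Because the return code is 0 only when every required backup passed, the check is easy to script as a gate before a recovery. The sketch below is hypothetical: the stand-in function run_check takes the place of the real hdbbackupdiag call so the return-code handling can be shown on its own.

```shell
# Sketch of a pre-recovery gate built on the documented return codes
# (0 = all required backups OK, 1 = at least one check failed).
# run_check stands in for: hdbbackupdiag --check -d "$DIR_INSTANCE/backup/log"
run_check() {
  return "${SIMULATED_RC:-0}"
}

if run_check; then
  verdict="backups complete - safe to start recovery"
else
  verdict="backup check failed - do not start recovery yet"
fi
echo "$verdict"
```

In a real landscape you would replace the body of run_check with the actual hdbbackupdiag invocation and, for example, send an alert instead of echoing.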


For more details, refer to SAP Note 1873247.

Sunday, 3 December 2017

How to Add text in SAP logon screen




Here are the details:

Go to transaction SE61, choose the document class 'General text', and create a text with the name ZLOGIN_SCREEN_INFO.

Note that there is space for 16 lines with 45 fixed-font characters each or for approximately 60 proportional space font characters on the login screen.

Title lines (recognizable by format keys starting with 'U') are highlighted in the display.

You can also output icons at the beginning of lines by using an icon code (for example, @1D@ for the STOP icon). You can get a list of icon codes from report RSTXICON. Pay attention to the codes with two '@' symbols displayed by the report. You cannot include text symbols, and the 'include indicator' cannot be used for this function.


Wednesday, 29 November 2017

How to register SLES 12 using the SUSEConnect command line


Environment

  • SUSE Linux Enterprise Server 12
  • SUSE Linux Enterprise Server 12 Service Pack 1 (SLES 12 SP1)
  • SUSE Linux Enterprise Server 12 Service Pack 2 (SLES 12 SP2)
Solution:

SUSEConnect -r YourActivationCode  -e YourEmailAddress  --debug


To get an overview of the available options for SUSEConnect, run "SUSEConnect --help":
SLES12:~ # SUSEConnect --help
Usage: SUSEConnect [options]

Register SUSE Linux Enterprise installations with the SUSE Customer Center.
Registration allows access to software repositories including updates,
and allows online management of subscriptions and organizations

Manage subscriptions at https://scc.suse.com
    -p, --product [PRODUCT]  Activate PRODUCT. Defaults to the base SUSE Linux
                                           Enterprise product on this system.
                                           Product identifiers can be obtained with 'zypper products'
                                           Format: //
    -r, --regcode [REGCODE]  Subscription registration code for the
                                          product to be registered.
                                          Relates that product to the specified subscription,
                                         and enables software repositories for that product
        --instance-data  [path to file]
                                          Path to the XML file holding the public key and instance data
                                          for cloud registration with SMT
    -e, --email       email address for product registration
         --url [URL]             URL of registration server (e.g. https://scc.suse.com).
                                         Implies --write-config so that subsequent invocations use the same registration server.
    -s, --status                  get current system registration status in json format
         --status-text          get current system registration status in text format
         --write-config       write options to config file at /etc/SUSEConnect
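Putting the options together, a typical registration-and-verification sequence might look like the following; the registration code and email address are placeholders:

```shell
# Hypothetical session: register the base product, then verify the result.
# Replace the registration code and email address with your own values.
SUSEConnect -r ABC123XYZ -e admin@example.com

# Confirm the registration status (text and JSON variants).
SUSEConnect --status-text
SUSEConnect -s
```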

Friday, 17 November 2017

How to switch operation mode in HANA system replication environment


Symptom:

In an SAP HANA system replication environment, you want to switch the operation mode.

Environment

SAP HANA 1.0

Solution:
Change operation mode from delta_datashipping to logreplay

1. Stop secondary system.

2. Execute the following command on the secondary site.

hdbnsutil -sr_register --remoteHost= --remoteInstance= --replicationMode= --operationMode=logreplay --name=SITEB

3. Start secondary system.

Change operation mode from logreplay to delta_datashipping

1. Stop secondary system.

2. Execute the following command on the secondary site.

hdbnsutil -sr_register --force_full_replica --remoteHost= --remoteInstance= --replicationMode= --operationMode=delta_datashipping --name=SITEB

3. Start secondary system.

NOTE: Switching operation modes from logreplay to delta_datashipping requires a full data shipping.
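For illustration, a filled-in version of step 2 might look as follows; the host name, instance number, replication mode and site name are hypothetical and must match your own landscape:

```shell
# Hypothetical example: re-register the secondary with operation mode logreplay.
# hostA, 00, sync and SITEB are placeholder values for your own landscape.
hdbnsutil -sr_register --remoteHost=hostA --remoteInstance=00 \
  --replicationMode=sync --operationMode=logreplay --name=SITEB
```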

For more details, refer to SAP KBA 2500677.

Monday, 13 November 2017

SAP HANA 2.0 SP02 Features Access restrictions in Active/Active (read enabled) system setup

New feature in SAP HANA 2.0 SPS 02: in a system replication setup, both sites are ACTIVE (Active/Active, read enabled).




Symptom:

With the SAP HANA 2.0 SPS 00 release, the secondary instance in an Active/Active (read enabled) system replication setup provides a read-only service. With SAP HANA 2.0 SPS 00 and SPS 01, you encounter "feature not supported" errors when you try to create or access the following table types on the secondary. These restrictions apply even if you connect directly to the secondary.

  • row store tables
  • row store global temporary tables
  • row store no-logging tables
  • column store global temporary tables
  • column store no-logging tables

Since the SAP HANA 2.0 SPS 02 release, most of these restrictions are gone. As of this version you can access all of the above-mentioned table types on the Active/Active (read enabled) secondary system, except:

column store no-logging tables

However, row store tables can only be accessed via a direct connection to the secondary system; hint-based statement routing from the primary to access row store tables on the secondary will be possible in future releases.

Prerequisite:

Running SAP HANA 2.0 system with Active/Active (read enabled) system setup

Solution
Access to column store no-logging tables will be enabled in a future support package.


For more details, refer to SAP Note 2391079.





Wednesday, 8 November 2017

Error in SAP Instance '/' of SAP system is in an inconsistent state: the processes do not seem to have been started within the instance



Symptom:

During the execution of Software Provisioning Manager tool (SWPM), the tool fails with the following error:

sapinst_dev.log
ERROR     

Instance '/' of SAP system is in an inconsistent state: the processes do not seem to have been started within the instance.

Resolution:

1. Stop the SWPM tool;

2. Stop the instance (in case it is not stopped already);

3. Make sure the sapstartsrv process is not running for the instance. You can check this by running the following command:

Example: 

$ ps -ef | grep sapstartsrv
sapadm   4199       1  0 Jan27 ?   00:10:11 /usr/sap/hostctrl/exe/sapstartsrv pf=/usr/sap/hostctrl/exe/host_profile -D

abcadm   5375       1  0 Apr29 ?   00:08:43 /usr/sap/ABC/DVEBMGS00/exe/sapstartsrv pf=/usr/sap/SM1/SYS/profile/START_DVEBMGS00_host -D

abcadm  19285       1  0 Jun15 ?   00:01:19 /usr/sap/ABC/SCS02/exe/sapstartsrv pf=/usr/sap/SM1/SYS/profile/START_SCS02_host -D

abcadm  23002       1  0 May25 ?   00:03:53 /usr/sap/ABC/ASCS01/exe/sapstartsrv pf=/usr/sap/SM1/SYS/profile/START_ASCS01_host -D

abcadm  23005 23706 0 12:52 pts/0  00:00:00 grep sapstartsrv


The first and last lines can be ignored: the first is the sapstartsrv for the host control (hostctrl), which is not relevant for the issue, and the last is the output of the grep command itself.
That leaves three sapstartsrv processes, one for each instance of the system. Manually stop the process of the instance with issues.
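To find the pid of the sapstartsrv process that belongs to one specific instance, the ps listing can be filtered. The snippet below runs against a captured sample of the output shown above, so the instance name ASCS01 and the pid are illustrative only:

```shell
# Filter a captured "ps -ef | grep sapstartsrv" listing for one instance
# (ASCS01 here) and extract its pid (column 2). In a live system you would
# pipe ps -ef directly instead of using the sample text.
sample='sapadm   4199     1 0 Jan27 ? 00:10:11 /usr/sap/hostctrl/exe/sapstartsrv pf=/usr/sap/hostctrl/exe/host_profile -D
abcadm  23002     1 0 May25 ? 00:03:53 /usr/sap/ABC/ASCS01/exe/sapstartsrv pf=/usr/sap/SM1/SYS/profile/START_ASCS01_host -D'

pid=$(printf '%s\n' "$sample" | awk '/\/ASCS01\/exe\/sapstartsrv/ {print $2}')
echo "$pid"
# A real stop would then be: kill "$pid"  (shown here only, not executed)
```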

4. Check if the file /usr/sap/sapservices exists on your system; otherwise, create it as described in SAP Note 823941.

5. Confirm whether your system uses a start profile or only an instance profile.

6. If it has a start profile, make sure all sapstartsrv commands point to the start profile. In addition, make sure all startup commands mentioned in the Cause section exist in the start profile.

7. If the start profile does not exist on your SAP system, make sure that all startup commands are properly set in the instance profile.

8. Start the instance once again.

9. For test purposes, run the command below. It must return the process list.

/usr/sap///exe/sapcontrol -prot NI_HTTP -nr -function GetProcessList

If the output lists the process statuses (like the example below), it has worked and you can now run the SWPM tool once again to continue the process.

/usr/sap/ABC/ASCS01/exe/sapcontrol -prot NI_HTTP -nr 01 -function GetProcessList

GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
msg_server, MessageServer, GREEN, Running, 2015 06 04 21:01:45, 482:39:35, 8452
enserver, EnqueueServer, GREEN, Running, 2015 06 04 21:01:46, 482:39:34, 8488

In case the issue remains and you require further assistance from BC-INS experts, provide the following information:

  • output of the command /usr/sap///exe/sapcontrol -prot NI_HTTP -nr -function GetProcessList in a text file
  • the /usr/sap/sapservices file
  • output of the command ps -ef | grep sapstartsrv in a text file
  • sapinst_dev.log
  • control.xml
  • summary.html
  • keydb.xml
  • the entire /usr/sap//SYS/profile directory
  • the entire /usr/sap///work directory


OR

Clean the shared memory of the instance.

Run the command:

cleanipc  remove

where XX is your instance number. Run this for both the DB and the app server instances; then you can continue the installation.
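As an illustration, for instance number 01 the call would look like this (the instance number is hypothetical; run it as the instance's admin user):

```shell
# Hypothetical example: remove the shared memory segments of instance 01.
cleanipc 01 remove
```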





Friday, 3 November 2017

Error in SUM phase RESTART-SYSTEM-JAVAONLY




Symptom:

Software Update Manager (SUM) throws the below error in the step RESTART-SYSTEM-JAVAONLY (logfile: \SUM\sdt\log\SUM\RESTART-SYSTEM-JAVAONLY_XX.LOG):

Apr 28, 2017, 2:40:04 AM [Error ]: Return code condition success evaluated to false for process sap control for action wait for start.

Apr 28, 2017 2:40:04 AM [Error ]: The following problem has occurred during step execution: com.sap.sdt.util.diag.DiagException: Could not check status of SAP instance with number 0.
Could not check if the instance number on host is started.
Sapcontrol client could not perform action wait for started on instance
Return code condition success evaluated to false for process sap control for action wait for the start.

The WaitForStarted web method errors out as shown below (logfile: \SUM\sdt\log\SUM\SAPCONTROL_WAITFORSTARTED_XX_YY.OUT):

Resolution

To fix the issue, perform the following steps:

1. Run netstat -n to find a free port in the range 10000 to 65000.
2. Set the parameter icm/admin_port to one of the free ports (e.g. 60000) in the instance profile of the affected instance.
3. Restart the sapstartsrv for the instance using the command: sapcontrol -nr -user -function RestartService
4. Repeat the phase in SUM.
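The netstat check boils down to picking a port that does not appear in the local-address column of netstat -n. That check can be scripted; the snippet below runs against a captured sample of netstat output, so the listed addresses and ports are illustrative only:

```shell
# Decide whether a candidate port is free by checking the local-address
# column (field 4) of netstat -n output. The sample stands in for live output.
sample='tcp 0 0 10.0.0.5:50013 10.0.0.9:40022 ESTABLISHED
tcp 0 0 10.0.0.5:3200  10.0.0.7:51544 ESTABLISHED'

port_free() {
  ! printf '%s\n' "$sample" | awk '{print $4}' | grep -q ":$1\$"
}

port_free 60000 && echo "60000 is free"
port_free 3200  || echo "3200 is in use"
```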

OR

Use the latest version of the SUM tool.

For more details, refer to SAP Note 2328080.

Wednesday, 1 November 2017

SAP HANA STUDIO with Secure storage is locked error

Symptom:

In SAP HANA Studio, you see the "Secure storage is locked" error in the Systems tab.

In error log you can see:
Secure storage was unable to retrieve the master password. If secure
storage was created using a different Windows account, you'll have to
switch back to that account. Alternatively, you can use the password
recovery, or delete and re-create secure storage.

Solution:

There are several possibilities; try the following steps:

Go to HANA Studio -> Window -> Preferences -> General -> Security -> Secure Storage -> Password, then click the Recover Password button; you will be asked the question you answered when you created the secure storage.
If you enter the correct answer, the secure storage is unlocked.

If the first step does not work, delete the added systems in your HANA Studio, then go to
HANA Studio -> Window -> Preferences -> General -> Security -> Secure Storage -> Contents tab.
Delete the default secure storage and restart HANA Studio.
If the aforementioned does not work, delete the .eclipse folder under your HANA Studio workspace and try again.

Sunday, 29 October 2017

SAP HANA Configuring automatic SAP HANA Cleanup with SAP HANACleaner

Symptom:

You are interested in scheduling regular SAP HANA cleanup activities automatically.

Cause:

Certain SAP HANA cleanup tasks like purging the backup catalog or deleting old trace files (SAP Note 2119087) need to be implemented individually. SAP HANACleaner is now available to perform these tasks automatically.

SAP HANACleaner is implemented via Python script.

This script is an expert tool designed by SAP Support. You are allowed to use it, but SAP doesn't take any responsibility for problems originating from the use of this tool.

Resolution:

SAP HANACleaner can be used for the following cleanup tasks:

Task                                                           SAP Note
Cleanup of backup catalog entries                              2096851
Cleanup of backups                                             1642148
Cleanup of trace files                                         2380176
Cleanup of backup.log and backint.log                          1642148
Cleanup of audit logs                                          2159014
Cleanup of SAP HANA alerts                                     2147247
Cleanup of free log segments                                   2083715
Cleanup of internal events                                     2147247
Cleanup of multiple row store containers                       2222277
Cleanup of data file fragmentation                             1870858
Cleanup of SAP HANACleaner logs                                2399996
Optimize compression of tables not compressed                  2112604
Optimize compression of tables with columns not compressed     2112604
Optimize compression of tables with large UDIV overhead        2112604

You can install SAP HANACleaner in the following way:

Download the attached script hanacleaner.py

Copy it to a directory on your SAP HANA database server
Attention: Text-based "copy and paste" can result in unforeseen issues, so you should either download the file directly to the database server or make sure that you use a file-based copy approach that doesn't modify the file content.

Once it is installed, you can start it. The following command provides an overview of how SAP HANACleaner works and the available configuration options:

python hanacleaner.py --help 

When SAP HANACleaner is called without additional options (i.e. "python hanacleaner.py") no actions are performed. You always have to specify specific options that suit your needs.

The following command line options exist to adjust the behavior:

        ----  BACKUP ENTRIES in BACKUP CATALOG (and possibly BACKUPS)  ----
-be     minimum retained number of data backup (i.e. complete data backups and data snapshots) entries in the catalog, this
        number of entries of data backups will remain in the backup catalog, all older log backup entries will also be removed
        with BACKUP CATALOG DELETE BACKUP_ID (see SQL reference for more info), default: -1 (not used)
-bd     minimum retained days of data backup (i.e. complete data backups and data snapshots) entries in the catalog [days], the
        youngest successful data backup entry in the backup catalog that is older than this number of days is the oldest
        successful data backup entry not removed from the backup catalog, default: -1 (not used)
        Note: if both -be and -bd are used, the most conservative one, i.e. the flag that removes the fewest entries, decides
        Note: as mentioned in SAP Note 1812057, backup entries made via backint cannot be recovered, i.e. use -be and -bd with care
        if you want to be able to recover from older data backups (it is possible to recover from a specific data backup without the backup catalog)
-bb     delete backups also [true/false], backups are deleted when the related backup catalog entries are deleted with
        BACKUP CATALOG DELETE BACKUP_ID COMPLETE (see SQL reference for more info), default: false
-bo     output catalog [true/false], displays backup catalog before and after the cleanup, default: false
-br     output removed catalog entries [true/false], displays backup catalog entries that were removed, default: false
        Note: please do not use -bo and -br if your catalog is huge (> 10000 entries).
        ----  TRACE FILES  ----
-tc     retention days for trace file content [days], trace file content older than this number of days is removed
        from (almost) all trace files on all hosts (even currently opened trace files), default: -1 (not used)
-tf     retention days for trace files [days], trace files, on all hosts, that are older than this number of days are removed
        (except for the currently opened trace files), only files with certain extensions like .trc, .log etc. are taken into
        account; backup.log and backint.log are excepted, please see -zb and -zp instead, default: -1 (not used)
-to     output traces [true/false], displays trace files before and after the cleanup, default: false
-td     output deleted traces [true/false], displays trace files that were deleted, default: false
        ----  DUMP FILES  ----
-dr     retention days for dump files [days], manually created dump files (a.k.a. fullsystem dumps and runtime dumps) that are older than this number of days are removed, default: -1 (not used)
        ----  BACKUP LOGS  ----
-zb     backup logs compression size limit [mb], if any backup.log or backint.log file (see -zp below) is bigger than this size limit, it is compressed and renamed, default: -1 (not used)
-zp     zip path, specifies the path (and all subdirectories) where to look for the backup.log and backint.log files,
        default is the directory specified by the alias cdtrace
-zl     zip links [true/false], specifies if symbolic links should be followed when searching for backup logs in subdirectories
        of the directory defined by -zp (or by the alias cdtrace), default: false
-zo     print zipped backup logs [true/false], displays the backup.log and backint.log files that were zipped, default: false
        ----  ALERTS  ----
-ar     min retained alert days [days], min age (today not included) of retained statistics server alerts, default: -1 (not used)
-ao     output alerts [true/false], displays statistics server alerts before and after the cleanup, default: false
-ad     output deleted alerts [true/false], displays statistics server alerts that were deleted, default: false
        ----  OBJECT LOCK ENTRIES with UNKNOWN OBJECT NAME  ----
-kr     min retained unknown object lock days [days], min age (today not included) of retained object lock entries with unknown
        object name, in accordance with SAP Note 2147247, default: -1 (not used)
        ----  OBJECT HISTORY  ----
-om     object history table max size [mb], if the table _SYS_REPO.OBJECT_HISTORY is bigger than this threshold, it is cleaned up according to SAP Note 2479702, default: -1 (not used)
-oo     output cleaned memory from object table [true/false], displays how much memory was cleaned up from the object history table, default: false
        ----  LOG SEGMENTS  ----
-lr     max free log segments per service [number of log segments], if more free log segments exist for a service, the statement
        ALTER SYSTEM RECLAIM LOG is executed, default: -1 (not used)
        ----  EVENTS  ----
-eh     min retained days for handled events [days], handled events that are older are removed by first being acknowledged and then deleted, this is done for all hosts, default: -1 (not used)
-eu     min retained days for unhandled events [days], events that are older are removed by first being handled and acknowledged and then deleted, this is done for all hosts, default: -1 (not used)
        ----  AUDIT LOG  ----
-ur     retention days for audit log table [days], audit log content older than this number of days is removed, default: -1 (not used)
        ----  DATA VOLUME FRAGMENTATION  ----
-fl     fragmentation limit [%], maximum fragmentation of data volume files, of any service, before defragmentation of that service is started with ALTER SYSTEM RECLAIM DATAVOLUME ':' 120 DEFRAGMENT, default: -1 (not used)
-fo     output fragmentation [true/false], displays data volume statistics before and after defragmentation, default: false
        ----  MULTIPLE ROW STORE TABLE CONTAINERS  ----
-rc     row store containers cleanup [true/false], switch to clean up multiple row store table containers, default: false
        Note: unfortunately there is no nice way to grant the DB user the privileges to be allowed to do this; either
        run hanacleaner as the SYSTEM user (NOT recommended) or grant DATA ADMIN to the user (NOT recommended)
-ro     output row containers [true/false], displays row store tables with more than one container before cleanup, default: false

        ----  COMPRESSION OPTIMIZATION  ----
        1. Both of the following two flags, -cc and -ce, must be > 0 to force compression optimization on tables that were never
        compression re-optimized (i.e. last_compressed_record_count = 0):
-cc     max allowed raw main records, if the table has more raw main rows --> compress if -ce, default: -1 (not used) e.g. 10000000
-ce     max allowed estimated size [GB], if the estimated size is larger --> compress if -cc, default: -1 (not used) e.g. 1
        2. All of the following three flags, -cr, -cs and -cd, must be > 0 to force compression optimization on tables with
        columns with compression type 'DEFAULT' (i.e. no additional compression algorithm in main):
-cr     max allowed rows, if a column has more rows --> compress if -cs & -cd, default: -1 (not used) e.g. 10000000
-cs     max allowed size [MB], if a column is larger --> compress if -cr & -cd, default: -1 (not used) e.g. 500
-cd     min allowed distinct count [%], if a column has a smaller distinct quota --> compress if -cr & -cs, default: -1 (not used) e.g. 5
        3. Both of the following two flags, -cu and -cq, must be > 0 to force compression optimization on tables whose UDIV
        quota is too large, i.e. #UDIVs/(#raw main + #raw delta):
-cq     max allowed UDIV quota [%], if the table has a larger UDIV quota --> compress if -cu, default: -1 (not used) e.g. 150
-cu     max allowed UDIVs, if a column has more than this number of UDIVs --> compress if -cq, default: -1 (not used) e.g. 10000000
        4. Flag -cb must be > 0 to force compression optimization on tables with columns with SPARSE (< Rev. 122.02) or PREFIXED compression and a BLOCK index:
-cb     max allowed rows, if a column has more rows and a BLOCK index and SPARSE (< Rev. 122.02) or PREFIXED compression, then this table should be compression re-optimized, default: -1 (not used) e.g. 100000
        The following three flags are general; they control all four compression optimization possibilities above:
-cp     per partition [true/false], switch to consider the flags above per partition instead of per column, default: false
-cm     merge before compress [true/false], switch to perform a delta merge on the tables before compression, default: false
-co     output compressed tables [true/false], switch to print all tables that were compression re-optimized, default: false
     
        ---- INTERVALL  ----                                                                                                     
-hci    hana cleaner interval [days], number days that hanacleaner waits before it restarts, default: -1 (exits after 1 cycle)   
        
NOTE: Do NOT use if you run hanacleaner in a cron job!                                                                   
     
  ---- INPUT  ----                                                                                                         
-ff     flag file, full path to a file that contains input flags, each flag on a new line, all lines in the file that do not   
        start with a flag are considered comments, if this flag is used no other flags should be given, default: '' (not used)   
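As a sketch of the -ff mechanism (the path and flag values below are hypothetical), a flag file can bundle the options of a regular cleanup run:

```shell
# Build a flag file; any line that does not start with a flag is a comment.
cat > /tmp/hanacleaner_flags.txt <<'EOF'
keep the 10 newest backups, drop catalog entries older than 30 days
-be 10
-bd 30
-td true
EOF

# hanacleaner would then be started with the flag file as its only argument:
# python hanacleaner.py -ff /tmp/hanacleaner_flags.txt
```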
       
  ---- EXECUTE  ----                                                                                                        
-es     execute sql [true/false], execute all crucial housekeeping tasks (useful to turn off for investigation with -os=true),   
        default: true                                                                                                             
       
 ---- OUTPUT  ----                                                                                                        
-os     output sql [true/false], prints all crucial housekeeping tasks (useful for debugging with -es=false), default: false     
-op     output path, full path of the folder for the output logs (if it does not exist it will be created), default = "" (not used)      
-or     output retention days, logs in the path specified with -op are only saved for this number of days, default: -1 (not used)
-so     standard out switch [true/false], switch to write to standard out, default:  true                                         
        ---- SERVER FULL CHECK ----                                                                                              
-fs     file system, path to server to check for disk full situation before hanacleaner runs, default: blank, i.e. df -h is used 
                     Could also be used to specify a couple of servers with e.g. -fs "|grep sapmnt"                              
-if     ignore filesystems, before hanacleaner starts it checks that there is no disk full situation in any of the filesystems,  
        this flag makes it possible to ignore some filesystems, with a comma separated list, from the df -h command, default: ''   
      
  ----  SSL  ----                                                                                                          
-ssl    turns on ssl certificate [true/false], makes it possible to use SAP HANA Cleaner despite SSL, default: false    
         
        ----  USER KEY  ----                                                                                                     
-k      DB user key, this one has to be maintained in hdbuserstore, i.e. as <sid>adm do                                          
        > hdbuserstore SET <KEY> <host>:<port> <user> <password>          , default: SYSTEMKEY                    
        It could also be a list of comma separated user keys (useful in MDC environments), e.g.: SYSTEMKEY,TENANT1KEY,TENANT2KEY  
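For illustration (host, ports and passwords below are hypothetical; in an MDC system the SYSTEMDB SQL port is typically 3<nn>13), the keys could be maintained as <sid>adm like this:

```shell
# Store connection keys in the secure user store (run as <sid>adm).
hdbuserstore SET SYSTEMKEY  localhost:30013 SYSTEM Secret1Pw
hdbuserstore SET TENANT1KEY localhost:30015 SYSTEM Secret2Pw

# Verify which keys are maintained.
hdbuserstore LIST

# Use the keys with hanacleaner, e.g. for an MDC system:
python hanacleaner.py -k SYSTEMKEY,TENANT1KEY -bd 30 -be 10
```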
The following table lists some examples of how to call SAP HANACleaner for different purposes:

Command                                                                  Details
python hanacleaner.py                                                    No execution of actions ("hanacleaner needs input arguments")
python hanacleaner.py -be 10 -bd 30 -td true                             Clean up backup catalog entries and backups that are older than 30 days and that do not belong to the ten newest backups
python hanacleaner.py -tc 42 -tf 42 -ar 42 -bd 42 -zb 50 -eh 2 -eu 42    Clean up statistics server alerts, traces and backup catalog entries older than 42 days, rename and compress backup.log and backint.log when their size exceeds 50 MB, and handle / acknowledge events after 2 / 42 days
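Because -hci keeps the script running in a loop, scheduled execution is usually done via cron instead (and, as noted above, -hci must then not be used). A hypothetical crontab entry (all paths are assumptions) could look like:

```shell
# m h dom mon dow  command  -- run hanacleaner every Sunday at 02:00
0 2 * * 0 /usr/bin/python /hana/shared/scripts/hanacleaner.py -ff /hana/shared/scripts/hanacleaner_flags.txt >> /tmp/hanacleaner_cron.log 2>&1
```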

For more details, refer to SAP Note 2399996.


Thursday, 26 October 2017

SAP HANA multitenant database containers

Tags


Symptom:

SAP HANA multitenant database containers

Solution:

This note is a collection of information about SAP HANA multitenant database containers. It links to further materials.

Introduction:

The feature "SAP HANA multitenant database containers" (or "MDC") was first introduced with SPS09. The concept is based on a single SAP HANA system or database management system (DBMS), with a single system ID (SID), which contains at least one tenant database in addition to a system database. The system database keeps the system-wide landscape information and provides system-wide configuration and monitoring. Users of one tenant database cannot connect to other tenant databases, nor access application data there (unless the system is enabled for cross-database access). The tenant databases are, by default, isolated from each other with regard to application data and user management. Each tenant database can be backed up and recovered independently of the others. Since all tenant databases are part of the same SAP HANA DBMS, they all run with the same SAP HANA version (revision number). In addition, with regard to HA/DR, the defined HA/DR scenario applies to all tenant databases.

Focus:

On-Premise Scenarios:
Alternative to MCOS deployments (Multiple Components One System): install MDC instead of MCOS where it fits
Featuring several tenant databases
Address common MCOD scenarios (e.g. ERP-CRM-BW, QA/DEV, Data Marts)
Cloud Scenarios (SAP internal):
SAP HANA Cloud Platform
SAP HANA Enterprise Cloud
Positioning:

Reduces TCO
Enables tenant operation on database level
Offers integrated administration, monitoring, resource management, strong isolation
Offers optimized cross-database operation within the system (read access)
Supports flexible landscape management, cloud scenarios, on-premise scenarios
Combining Applications, Scenarios:

In general, all applications that are supported to run on a single database SAP HANA system are also supported to run on an MDC system, particularly if the application functionality in use is the same as when running on a single database SAP HANA system. However, for statements about a specific application’s support for special features of MDC, such as cross-tenant query functionality, and for general statements about a specific SAP application’s support for SAP HANA MDC, please consult application-specific information regarding viable deployment options.

Note: Some other SAP notes discuss restrictions when combining applications on SAP HANA in a single database (known as MCOD, Multiple Components One Database), such as SAP Note 1661202 (whitelist of applications/scenarios) and SAP Note 1826100 (whitelist relevant when running SAP Business Suite on SAP HANA). These restrictions do not apply if each application is deployed in its own tenant database, but they do apply to deployments inside a given tenant database.

SAP Note 1681092 discusses support for more than one SAP HANA DBMS on a single hardware unit (otherwise known as MCOS, Multiple Components One System). With MDC, we aim to meet most customer requirements that would otherwise lead to considering MCOS, except perhaps cases that require more than one SAP HANA version on the same hardware installation, which is most likely to occur in a non-production system.

Sizing and Implementation Approach (Recommendation):

A pragmatic approach for sizing MDC systems is required. The general recommendation is to perform a sizing exercise for each application or use case and then utilize an additive sizing approach. When considering BW or Suite-on-HANA, for example, consult SAP Notes 1774566, 1825774 and 2121768 (which can be found attached to this note). In the current timeframe, Suite-on-HANA must be deployed on a single node in an MDC system. In determining which applications to deploy, a step-by-step approach makes sense: first, install a few applications in different tenants and proactively monitor resource utilization and performance; based on these observations, make determinations about possible additional deployments of applications on other tenant databases in the same system. Implementation considerations: as MDC is a relatively new technology, a conservative approach to implementation is warranted. A significant amount of stress/volume testing on a project basis is recommended.

Additive sizing: Perform a sizing estimation for each tenant database, utilizing known sizing approaches (e.g. quick sizer, POC, working with a hardware partner, etc) as if it were a single database.  Next, add the individual sizing estimates together and avoid underestimating. In addition to CPU and memory sizing aspects, the I/O throughput aspect should be taken into account. One option to address this is to utilize the SAP HANA HW Configuration Check Tool, to measure if the used storage is able to deliver the required I/O capacity. SAP Note 1943937 has more details.

Architecturally, each tenant database has its own index server, and upon initial deployment, consumes approximately 8 GB (before any application data is added). This means that the actual number of tenant databases that can be deployed is limited by available resources; in other words, a large number of very small tenant databases is not possible given the current MDC architecture.

Separation and Security recommendation

Multitenant database containers are suggested to be used in a trusted environment. It is not recommended, especially with SPS09, to run a hosted environment with several databases from different domains, separate data sources, or different customers together in one multitenant container system. With SPS10, an option was added to increase the isolation level between tenant databases. If requirements dictate running tenant databases of different customers, or strict separation of data, SPS10 or higher is recommended.


Migration towards an 'SAP HANA multitenant database containers' system

An SAP HANA single database system can be migrated to a multitenant database system. This step is irrevocable. The single database is the SAP HANA default configuration; MDC is optional. Upgrading to a higher support package will not change the system mode. If a migration is explicitly launched, the single database will be converted into a tenant database.
During this process, the system database, responsible for the system topology, will be generated. There will be no changes to application/customer data. Additional tenant databases can be added to the system afterward.

Copying databases into tenant databases: 

With HANA 1.0, it is not supported to take a backup of the single database system, and then restore it into a tenant database of an MDC system. Only a backup from an MDC system can be utilized to copy into a tenant of another MDC system.  Here is an example approach that would accomplish this objective:

A sample pattern to move a single database into an MDC tenant would be:

1. perform a system copy of a single database system to create another system
2. update this copied system to a revision enabled for MDC (SPS09 rev95 or higher)
3. migrate this copy to an MDC system; this will result in a system database and one tenant database
4. take a backup from this tenant database
5. set up your target tenant database by creating a new tenant database in a new or an already existing MDC system
6. restore the tenant backup (from step 4) into this new tenant
Starting with HANA 2.0, a single database backup can be restored directly into a tenant database. Step 4 of the list above would then be the starting point: take a backup from the database. Steps 1-3 become obsolete.
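Assuming instance number 00, a tenant named TDB1 and placeholder passwords and paths (all hypothetical), steps 4 to 6 could be sketched with hdbsql against the respective system databases:

```shell
# Step 4: in the source system, back up the tenant from the system database.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p 'SrcPw123' \
  "BACKUP DATA FOR TDB1 USING FILE ('/backup/tenantcopy/TDB1')"

# Step 5: in the target MDC system, create the target tenant database.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p 'TgtPw123' \
  "CREATE DATABASE TDB1 SYSTEM USER PASSWORD 'InitialPw1'"

# Step 6: stop the new tenant and restore the backup from step 4 into it.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p 'TgtPw123' \
  "ALTER SYSTEM STOP DATABASE TDB1"
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p 'TgtPw123' \
  "RECOVER DATA FOR TDB1 USING FILE ('/backup/tenantcopy/TDB1') CLEAR LOG"
```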

License Management

Starting with HANA 2.0 SPS02, license keys can be installed in individual tenant databases. For more information about license handling in MDC systems, see 'License Keys for the SAP HANA Database' in the 'SAP HANA Administration Guide' or 'SAP HANA Tenant Databases'.

Administration and monitoring tools

Several tools are available for the administration of SAP HANA. While all tools support database-level administration (which is comparable to the administration of a single-container system), system-level administration of tenant databases requires the SAP HANA cockpit. With SPS10, a new catalog for the system database is available in the SAP HANA cockpit to monitor overall system health and manage all tenant databases. The SAP HANA studio is required for system configuration tasks. For more information about the SAP HANA cockpit and other administration tools, see 'SAP HANA Administration Tools' in the SAP HANA Administration Guide.

Starting with HANA 2.0, a transition regarding tooling begins. Please see SAP Note 2396214.

Backup and Recovery

The focus of the SAP HANA backup and recovery concept for MDC is on backing up and restoring the individual tenant databases, including the system database. In the end, there will be a set of tenant backups plus the system database backup, which can be restored individually or used all together to restore a complete MDC system from scratch:

1. install the MDC software/system; the system database will implicitly be created
2. start recovery of the system database; afterwards, the system database tracks all tenants
3. no additional CREATE DATABASE requests are necessary
4. start recovery tenant by tenant
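A hedged sketch of that sequence (instance number, tenant name and backup paths are assumptions; recoverSys.py is the offline recovery helper shipped with SAP HANA):

```shell
# Recover the system database offline as <sid>adm (HANA must be stopped).
HDBSettings.sh recoverSys.py \
  --command="RECOVER DATA USING FILE ('/backup/SYSTEMDB') CLEAR LOG"

# With the system database running again, recover each tenant from it;
# no CREATE DATABASE is needed, the catalog already lists the tenants.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p 'Pw12345' \
  "RECOVER DATA FOR TDB1 USING FILE ('/backup/TDB1') CLEAR LOG"
```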
Using Backint

Using 'Backint' with external backup tools for recovering MDC systems/tenants requires SPS09 revision 94 or higher. This fix will allow using Backint for backing up and recovering exactly the same tenant of an MDC system. It may not be used for restoring it into another tenant in the same or in another MDC system (tenant copy).

As of SAP HANA 2.0 SPS01, Backint-based backups of a tenant database can be recovered into a tenant database using a target SID or tenant name that is different from the original one. Likewise, Backint-based backups of a single container system can be recovered into a tenant database using a target SID that is different from the original one.

Moving and copying tenant databases

Tenant databases can be moved or copied using the backup and restore capabilities. This requires downtime only for the affected tenant database; the other tenant databases can stay online. Simply perform a backup, and then either create a new tenant database and restore the backup into it, or restore the copy into an existing tenant database.

With SPS12, a new path for copying and moving tenant databases has been introduced. It utilizes the algorithms of system replication to enable a near-zero-downtime copy/move of a tenant database, within an MDC system or between MDC systems. This feature is currently available from the command-line interface only. NOTE: for the time being, this copy/move cannot be used if HA/DR system replication is active. This is addressed in future planning.

The standard copy/move process of tenant databases requires an initial certificate configuration in order to enable communication between systems.

Inside an MDC system, in a non-production setup or an isolated environment, it may be reasonable to proceed without trusted communication. Starting with HANA 2.0, the internal communication of the copy/move processes may also run unencrypted.

Cross-Database Access

Inside the same SAP HANA system, read-only queries between tenant databases are possible. Database objects like tables and views of one tenant database can be read by users of other tenant databases. By default, cross-database access between tenants is inactive. It must be explicitly enabled, and a user mapping is needed in the remote tenant database (more information is available in the SAP HANA Administration Guide and SAP Note 2196359). As cross-database access is subject to certain restrictions, please refer to the documentation for further guidance.
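As an illustration (tenant names DB1/DB2, user names and passwords are hypothetical), allowing users of tenant DB1 to read from tenant DB2 involves two steps: an ini parameter in the system database and a user mapping in the remote tenant:

```shell
# In SYSTEMDB: enable cross-database access and allow DB1 to reach DB2.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p 'SysPw123' \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
     SET ('cross_database_access', 'enabled') = 'true',
         ('cross_database_access', 'targets_for_DB1') = 'DB2'
     WITH RECONFIGURE"

# In the remote tenant DB2: map the querying user USER1 from DB1 to the
# local user READER, so requests from USER1@DB1 run with READER's privileges.
hdbsql -i 00 -d DB2 -u SYSTEM -p 'Db2Pw123' \
  "ALTER USER READER ADD REMOTE IDENTITY USER1 AT DATABASE DB1"
```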

High availability/Disaster recovery

The chosen SAP HANA HA/DR solution applies to the whole SAP HANA instance, and thus to all tenant databases including the system database. The HA/DR solution SAP HANA system replication works with an "all or nothing" approach: all tenant databases are subject to failover to another data center. Newly created tenant databases are integrated into the replication process automatically after they have been backed up. The HA/DR solution SAP HANA storage replication is agnostic about the instance content and thus requires no special actions regarding tenant databases.

Load management

When implementing MDC, attention to workload management details is required. The help documentation, in particular the MDC operations guide, discusses this topic. In order to divide up system resources, parameters such as the allocation limit and max_concurrency should be set when tenant databases share the same host.
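For example (tenant name and values are assumptions), both parameters can be set per tenant from the system database via the DATABASE configuration layer:

```shell
# Cap tenant TDB1 at 64 GB of memory (allocationlimit is given in MB) ...
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p 'SysPw123' \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'DATABASE', 'TDB1')
     SET ('memorymanager', 'allocationlimit') = '65536' WITH RECONFIGURE"

# ... and limit its intra-statement parallelism to 32 threads.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p 'SysPw123' \
  "ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'DATABASE', 'TDB1')
     SET ('execution', 'max_concurrency') = '32' WITH RECONFIGURE"
```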

Current restrictions / not supported with MDC (as of SPS12, until further notice)

SAP HANA Smart Data Streaming can be installed on an SAP HANA system that has been configured for multitenant database containers only starting with SPS12. Refer to the SDS material for further information.

SAP HANA Dynamic Tiering is not recommended for production use within MDC systems up to SPS12. SAP HANA dynamic tiering only supports low tenant isolation. Each tenant database is associated with a maximum of one dynamic tiering worker/standby pair; conversely, the same dynamic tiering worker/standby pair cannot be associated with multiple databases. Refer to the DT material for further information.

For the time being, tenant database copy/move cannot be used if HA/DR system replication is active; this is addressed in future planning. In that case, please use backup/recovery for copy/move purposes instead.

Backup and recovery using snapshots is not yet available for MDC.

SAP HANA Application Lifecycle Management support for tenant databases begins with revision 96, see note 2073243.

FullSystemInfoDump does not work on tenants; it can only be run from the system database for all tenants.


For more details, refer to SAP Note 2096000.