Sunday, 29 October 2017

SAP HANA: Configuring automatic SAP HANA Cleanup with SAP HANACleaner


Symptom:

You are interested in scheduling regular SAP HANA cleanup activities automatically.

Cause:

Certain SAP HANA cleanup tasks, like purging the backup catalog or deleting old trace files (SAP Note 2119087), need to be implemented individually. SAP HANACleaner is now available to perform these tasks automatically.

SAP HANACleaner is implemented via Python script.

This script is an expert tool designed by SAP Support. You are allowed to use it, but SAP does not take responsibility for problems originating from its use.

Resolution:

SAP HANACleaner can be used for the following cleanup tasks:

Task                                                           SAP Note
Cleanup of backup catalog entries                              2096851
Cleanup of backups                                             1642148
Cleanup of trace files                                         2380176
Cleanup of backup.log and backint.log                          1642148
Cleanup of audit logs                                          2159014
Cleanup of SAP HANA alerts                                     2147247
Cleanup of free log segments                                   2083715
Cleanup of internal events                                     2147247
Cleanup of multiple row store containers                       2222277
Cleanup of data file fragmentation                             1870858
Cleanup of SAP HANACleaner logs                                2399996
Optimize compression of tables not compressed                  2112604
Optimize compression of tables with columns not compressed     2112604
Optimize compression of tables with large UDIV overhead        2112604

You can install SAP HANACleaner in the following way:

Download the attached script hanacleaner.py

Copy it to a directory on your SAP HANA database server
Attention: Text-based "copy and paste" can result in unforeseen issues, so you should either download the file directly to the database server or make sure that you use a file-based copy approach that doesn't modify the file content.

Once it is installed, you can start it. The following command provides an overview of how SAP HANACleaner works and of the available configuration options:

python hanacleaner.py --help 

When SAP HANACleaner is called without additional options (i.e. "python hanacleaner.py"), no actions are performed. You always have to specify the options that suit your needs.
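Before scheduling anything, it can be useful to preview what would happen. A minimal dry-run sketch using the -es and -os flags from the option list (the retention values of 42 days are illustrative, not recommendations):

```shell
# Dry run: print the housekeeping SQL that would be executed, but do not run it.
# -es false disables execution, -os true prints the statements.
python hanacleaner.py -tf 42 -bd 42 -es false -os true
```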

The following command line options exist to adjust the behavior:

        ----  BACKUP ENTRIES in BACKUP CATALOG (and possibly BACKUPS)  ----
-be     minimum retained number of data backup entries (i.e. complete data backups and data snapshots) in the catalog; this
        number of data backup entries will remain in the backup catalog, and all older log backup entries are also removed
        with BACKUP CATALOG DELETE BACKUP_ID (see the SQL reference for more information), default: -1 (not used)
-bd     minimum retained days of data backup entries (i.e. complete data backups and data snapshots) in the catalog [days];
        the youngest successful data backup entry in the backup catalog that is older than this number of days is the oldest
        successful data backup entry not removed from the backup catalog, default: -1 (not used)
        Note: If both -be and -bd are used, the most conservative wins, i.e. the flag that removes the fewest entries decides.
        Note: As mentioned in SAP Note 1812057, backup entries made via Backint cannot be recovered, i.e. use -be and -bd
        with care if you want to be able to recover from older data backups (it is possible to recover from a specific
        data backup without the backup catalog).
-bb     delete backups also [true/false]; backups are deleted when the related backup catalog entries are deleted with
        BACKUP CATALOG DELETE BACKUP_ID COMPLETE (see the SQL reference for more information), default: false
-bo     output catalog [true/false], displays the backup catalog before and after the cleanup, default: false
-br     output removed catalog entries [true/false], displays backup catalog entries that were removed, default: false
        Note: Please do not use -bo and -br if your catalog is huge (> 10000 entries).
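A hedged example combining the backup catalog flags above; the user store key SYSTEMKEY and the retention values are assumptions to adapt to your own policy:

```shell
# Keep the 10 newest data backup entries and 42 days of history; -bb true also
# deletes the physical backups, -br true prints the removed catalog entries.
python hanacleaner.py -be 10 -bd 42 -bb true -br true -k SYSTEMKEY
```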
        ----  TRACE FILES  ----
-tc     retention days for trace file content [days]; trace file content older than this number of days is removed
        from (almost) all trace files on all hosts (even currently open trace files), default: -1 (not used)
-tf     retention days for trace files [days]; trace files, on all hosts, that are older than this number of days are removed
        (except for the currently open trace files); only files with certain extensions like .trc, .log etc. are taken into
        account; backup.log and backint.log are excepted, please see -zb and -zp instead, default: -1 (not used)
-to     output traces [true/false], displays trace files before and after the cleanup, default: false
-td     output deleted traces [true/false], displays trace files that were deleted, default: false
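A sketch of a trace cleanup using the flags above; the 42-day retention is purely an example value:

```shell
# Remove trace files and trace file content older than 42 days on all hosts,
# printing the names of the deleted files.
python hanacleaner.py -tf 42 -tc 42 -td true
```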
        ----  DUMP FILES  ----
-dr     retention days for dump files [days]; manually created dump files (a.k.a. full system dumps and runtime dumps) that
        are older than this number of days are removed, default: -1 (not used)
        ----  BACKUP LOGS  ----
-zb     backup log compression size limit [mb]; if any backup.log or backint.log file (see -zp below) is bigger than this
        size limit, it is compressed and renamed, default: -1 (not used)
-zp     zip path, specifies the path (and all subdirectories) in which to look for the backup.log and backint.log files,
        default is the directory specified by the alias cdtrace
-zl     zip links [true/false], specifies if symbolic links should be followed when searching for backup logs in
        subdirectories of the directory defined by -zp (or by the alias cdtrace), default: false
-zo     print zipped backup logs, displays the backup.log and backint.log files that were zipped, default: false
        ----  ALERTS  ----
-ar     min retained alert days [days], minimum age (today not included) of retained statistics server alerts, default: -1 (not used)
-ao     output alerts [true/false], displays statistics server alerts before and after the cleanup, default: false
-ad     output deleted alerts [true/false], displays statistics server alerts that were deleted, default: false
        ----  OBJECT LOCK ENTRIES with UNKNOWN OBJECT NAME  ----
-kr     min retained unknown object lock days [days], minimum age (today not included) of retained object lock entries with
        an unknown object name, in accordance with SAP Note 2147247, default: -1 (not used)
        ----  OBJECT HISTORY  ----
-om     object history table max size [mb]; if the table _SYS_REPO.OBJECT_HISTORY is bigger than this threshold, it is
        cleaned up according to SAP Note 2479702, default: -1 (not used)
-oo     output cleaned memory from object table [true/false], displays how much memory was cleaned up from the object
        history table, default: false
        ----  LOG SEGMENTS  ----
-lr     max free log segments per service [number of log segments]; if more free log segments exist for a service, the
        statement ALTER SYSTEM RECLAIM LOG is executed, default: -1 (not used)
        ----  EVENTS  ----
-eh     min retained days for handled events [days]; handled events that are older are removed by first being acknowledged
        and then deleted, this is done for all hosts, default: -1 (not used)
-eu     min retained days for unhandled events [days]; events that are older are removed by first being handled and
        acknowledged and then deleted, this is done for all hosts, default: -1 (not used)
        ----  AUDIT LOG  ----
-ur     retention days for the audit log table [days], audit log content older than this number of days is removed,
        default: -1 (not used)
        ----  DATA VOLUME FRAGMENTATION  ----
-fl     fragmentation limit [%], maximum fragmentation of the data volume files of any service before defragmentation of
        that service is started with ALTER SYSTEM RECLAIM DATAVOLUME '<host>:<port>' 120 DEFRAGMENT, default: -1 (not used)
-fo     output fragmentation [true/false], displays data volume statistics before and after defragmentation, default: false
        ----  MULTIPLE ROW STORE TABLE CONTAINERS  ----
-rc     row store containers cleanup [true/false], switch to clean up multiple row store table containers, default: false
        Note: Unfortunately there is NO nice way to grant the DB user the privileges to do this. Either run hanacleaner
        as the SYSTEM user (NOT recommended) or grant DATA ADMIN to the user (NOT recommended).
-ro     output row containers [true/false], displays row store tables with more than one container before cleanup, default: false
        ----  COMPRESSION OPTIMIZATION  ----
        1. Both of the following two flags, -cc and -ce, must be > 0 to control the forced compression optimization on
        tables that were never compression re-optimized (i.e. last_compressed_record_count = 0):
-cc     max allowed raw main records; if a table has more raw main rows --> compress if -ce, default: -1 (not used) e.g. 10000000
-ce     max allowed estimated size [GB]; if the estimated size is larger --> compress if -cc, default: -1 (not used) e.g. 1
        2. All of the following three flags, -cr, -cs, and -cd, must be > 0 to control the forced compression optimization
        on tables with columns with compression type 'DEFAULT' (i.e. no additional compression algorithm in main):
-cr     max allowed rows; if a column has more rows --> compress if -cs & -cd, default: -1 (not used) e.g. 10000000
-cs     max allowed size [MB]; if a column is larger --> compress if -cr & -cd, default: -1 (not used) e.g. 500
-cd     min allowed distinct count [%]; if a column has a smaller distinct quota --> compress if -cr & -cs, default: -1 (not used) e.g. 5
        3. Both of the following two flags, -cu and -cq, must be > 0 to control the forced compression optimization on
        tables whose UDIV quota is too large, i.e. #UDIVs/(#raw main + #raw delta):
-cq     max allowed UDIV quota [%]; if the table has a larger UDIV quota --> compress if -cu, default: -1 (not used) e.g. 150
-cu     max allowed UDIVs; if a column has more than this number of UDIVs --> compress if -cq, default: -1 (not used) e.g. 10000000
        4. Flag -cb must be > 0 to control the forced compression optimization on tables with columns with SPARSE (< 122.02)
        or PREFIXED compression and a BLOCK index:
-cb     max allowed rows; if a column has more rows and a BLOCK index and SPARSE (< 122.02) or PREFIXED compression, then
        this table should be compression re-optimized, default: -1 (not used) e.g. 100000
        The following three flags are general; they control all compression optimization possibilities (1., 2., 3., 4.) above:
-cp     per partition [true/false], switch to consider the flags above per partition instead of per column, default: false
-cm     merge before compress [true/false], switch to perform a delta merge on the tables before compression, default: false
-co     output compressed tables [true/false], switch to print all tables that were compression re-optimized, default: false
        ----  INTERVAL  ----
-hci    hana cleaner interval [days], number of days that hanacleaner waits before it restarts, default: -1 (exits after 1 cycle)
        NOTE: Do NOT use this if you run hanacleaner in a cron job!
        ----  INPUT  ----
-ff     flag file, full path to a file that contains input flags, one flag per line; all lines in the file that do not
        start with a flag are considered comments; if this flag is used, no other flags should be given, default: '' (not used)
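Since -hci must not be combined with cron, a common pattern is to keep all flags in a flag file and schedule hanacleaner via crontab instead. A sketch; the file path, flag values, and crontab entry are all assumptions to adapt:

```shell
# Create a flag file; lines that do not start with a flag are treated as comments.
cat > /tmp/hanacleaner_flags.txt <<'EOF'
retention for traces and backup catalog (example values)
-tf 42
-tc 42
-be 10
-bd 42
EOF

# Hypothetical crontab entry for the <sid>adm user: run daily at 01:30
#   30 1 * * * python /path/to/hanacleaner.py -ff /tmp/hanacleaner_flags.txt
```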
       
        ----  EXECUTE  ----
-es     execute sql [true/false], execute all crucial housekeeping tasks (useful to turn off for investigation with
        -os=true), default: true
        ----  OUTPUT  ----
-os     output sql [true/false], prints all crucial housekeeping tasks (useful for debugging with -es=false), default: false
-op     output path, full path of the folder for the output logs (created if it does not exist), default: '' (not used)
-or     output retention days, logs in the path specified with -op are only kept for this number of days, default: -1 (not used)
-so     standard out switch [true/false], switch to write to standard out, default: true
        ----  SERVER FULL CHECK  ----
-fs     file system, path on the server to check for a disk-full situation before hanacleaner runs, default: blank,
        i.e. df -h is used
        Could also be used to restrict the check to certain file systems, e.g. -fs "|grep sapmnt"
-if     ignore filesystems; before hanacleaner starts it checks that there is no disk-full situation in any of the
        filesystems; this flag makes it possible to ignore some filesystems, as a comma-separated list, from the
        df -h output, default: ''
        ----  SSL  ----
-ssl    turns on ssl certificate [true/false], makes it possible to use SAP HANACleaner despite SSL, default: false
         
        ----  USER KEY  ----
-k      DB user key; this has to be maintained in hdbuserstore, i.e. as <sid>adm run
        > hdbuserstore SET <KEY> <host>:<port> <user> <password>          default: SYSTEMKEY
        It could also be a comma-separated list of user keys (useful in MDC environments), e.g.: SYSTEMKEY,TENANT1KEY,TENANT2KEY
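A concrete sketch of maintaining such a key; the key name, host, port, user, and password below are placeholders, not values from the note:

```shell
# Store credentials for hanacleaner in the secure user store (placeholders)
hdbuserstore SET HANACLEANERKEY myhost:30015 HANACLEANER_USER "MySecretPw1"

# Verify the stored key, then use it with hanacleaner
hdbuserstore LIST HANACLEANERKEY
python hanacleaner.py -k HANACLEANERKEY
```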
The following table lists some examples of how to call SAP HANACleaner for different purposes:

Command                                                                  Details
python hanacleaner.py                                                    No actions performed ("hanacleaner needs input arguments")
python hanacleaner.py -be 10 -bd 30 -td true                             Cleans up backup catalog entries and backups that are older than 30 days and that don't belong to the ten newest backups
python hanacleaner.py -tc 42 -tf 42 -ar 42 -bd 42 -zb 50 -eh 2 -eu 42    Cleans up statistics server alerts, traces and backup catalog entries older than 42 days, renames and compresses backup.log and backint.log when their size exceeds 50 MB, and handles/acknowledges events after 2/42 days

For more details, refer to SAP Note 2399996.


Thursday, 26 October 2017

SAP HANA multitenant database containers



Symptom:

SAP HANA multitenant database containers

Solution:

This note is a collection of information about SAP HANA multitenant database containers. It links to further materials.

Introduction:

The feature "SAP HANA multitenant database containers" (or "MDC") was first introduced with SPS09. The concept is based on a single SAP HANA system or database management system (DBMS), with a single system id (SID), which contains at least one tenant database in addition to a system database. The system database keeps the system-wide landscape information and provides system-wide configuration and monitoring. Users of one tenant database cannot connect to other tenant databases or access application data there (unless the system is enabled for cross-database access). The tenant databases are, by default, isolated from each other with regard to application data and user management. Each tenant database can be backed up and recovered independently of the others. Since all tenant databases are part of the same SAP HANA DBMS, they all run with the same SAP HANA version (revision number). In addition, with regard to HA/DR, the defined HA/DR scenario applies to all tenant databases.

Focus:

On-Premise Scenarios:
Alternative to MCOS deployments (Multiple Components One System): install MDC instead of MCOS where it fits
Featuring several tenant databases
Addresses common MCOD scenarios (e.g. ERP-CRM-BW, QA/DEV, Data Marts)
Cloud Scenarios (SAP internal):
SAP HANA Cloud Platform
SAP HANA Enterprise Cloud
Positioning:

Reduces TCO
Enables tenant operation on database level
Offers integrated administration, monitoring, resource management, strong isolation
Offers optimized cross-database operation within the system (read access)
Supports flexible landscape management, cloud scenarios, on-premise scenarios
Combining Applications, Scenarios:

In general, all applications that are supported to run on a single database SAP HANA system are also supported to run on an MDC system, particularly if the application functionality in use is the same as when running on a single database SAP HANA system. However, for statements about a specific application’s support for special features of MDC, such as cross-tenant query functionality, and for general statements about a specific SAP application’s support for SAP HANA MDC, please consult application-specific information regarding viable deployment options.

Note: Some other SAP notes discuss restrictions when combining applications on SAP HANA on a single database (known as MCOD Multiple Components One Database), such as note 1661202 (whitelist of applications/scenarios) and 1826100 (whitelist relevant when running SAP Business Suite on SAP HANA). These restrictions do not apply if each application is deployed on its own tenant database but do apply to deployments inside a given tenant database.

SAP Note 1681092 discusses support for more than one SAP HANA DBMS on a single hardware unit (otherwise known as MCOS, Multiple Components One System). With MDC, we aim to meet most customer requirements that would otherwise lead customers to consider MCOS, except perhaps cases which require more than one SAP HANA version on the same hardware installation, which is most likely to occur in a non-production system.

Sizing and Implementation Approach (Recommendation):

A pragmatic approach for sizing MDC systems is required. The general recommendation is to perform a sizing exercise for each application or use case and then utilize an additive sizing approach. When considering BW or Suite-on-HANA, for example, consult SAP Notes 1774566, 1825774, and 2121768 (sizing reports can be found attached to this note). In the current timeframe, Suite-on-HANA must be deployed on a single node in an MDC system. In determining which applications to deploy, a step-by-step approach makes sense. First, install a few applications in different tenants and proactively monitor resource utilization and performance; based on observations, make determinations about possible additional deployments of applications on other tenant databases in the same system. Implementation considerations: as MDC is a relatively new technology, a conservative approach to implementation is warranted. A significant amount of stress/volume testing on a project basis is recommended.

Additive sizing: Perform a sizing estimation for each tenant database, utilizing known sizing approaches (e.g. quick sizer, POC, working with a hardware partner, etc) as if it were a single database.  Next, add the individual sizing estimates together and avoid underestimating. In addition to CPU and memory sizing aspects, the I/O throughput aspect should be taken into account. One option to address this is to utilize the SAP HANA HW Configuration Check Tool, to measure if the used storage is able to deliver the required I/O capacity. SAP Note 1943937 has more details.

Architecturally, each tenant database has its own index server, and upon initial deployment, consumes approximately 8 GB (before any application data is added). This means that the actual number of tenant databases that can be deployed is limited by available resources; in other words, a large number of very small tenant databases is not possible given the current MDC architecture.

Separation and Security recommendation

Multitenant database containers are suggested to be used in a trusted environment. It is not recommended, especially with SPS09, to run a hosted environment with several databases from different domains, separate data sources, or different customers together in one multitenant container system. With SPS10, an option was added to increase the isolation level between tenant databases. If requirements dictate running tenant databases of different customers, or strict separation of data, SPS10 or higher is recommended.


Migration towards an 'SAP HANA multitenant database containers' system

An SAP HANA single database system can be migrated to a multitenant database system. This step is irrevocable. The single database is the SAP HANA default configuration; MDC is optional. Upgrading to a higher support package will not change the system mode. If a migration is explicitly launched, the single database is converted into a tenant database.
During this process, the system database, responsible for the system topology, is generated. There are no changes to application/customer data. Additional tenant databases can be added to the system afterward.
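Adding a tenant database afterwards is done from the system database, for example via hdbsql. A sketch; the instance number, tenant name, and passwords are placeholders:

```shell
# Connect to the system database and create a new tenant database.
# Instance number (00), names and passwords are hypothetical.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p "SysPassword1" \
  "CREATE DATABASE TENANT1 SYSTEM USER PASSWORD MyTenantPw1"
```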

Copying databases into tenant databases: 

With HANA 1.0, it is not supported to take a backup of a single database system and then restore it into a tenant database of an MDC system. Only a backup from an MDC system can be used to copy into a tenant of another MDC system. Here is an example approach that would accomplish this objective:

A sample pattern to move a single database into an MDC tenant would be:

1. perform a system copy of the single database system to create another system
2. update this copied system to a revision enabled for MDC (SPS09 rev 95 or higher)
3. migrate this copy to an MDC system; this results in a system database and one tenant database
4. take a backup of this tenant database
5. set up your target tenant database by creating a new tenant database in a new or an already existing MDC system
6. restore the tenant backup (from step 4) into this new tenant
Starting with HANA 2.0, a single database backup can be restored directly into a tenant database. Step 4 of the list above would then be the starting point: take a backup from the database. Steps 1-3 become obsolete.
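The final restore of the tenant backup into the target tenant can be issued from the system database. A hedged sketch; the instance number, tenant name, credentials, and backup path are placeholders:

```shell
# Recover the tenant database TENANT1 from a file-based data backup.
# The backup prefix path is hypothetical; the tenant must not be running.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p "SysPassword1" \
  "RECOVER DATA FOR TENANT1 USING FILE ('/backup/TENANT1_COMPLETE') CLEAR LOG"
```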

License Management

Starting with HANA 2.0 SPS02 license keys can be installed in individual tenant databases. For more information about the licensing handling in MDC systems, see 'License Keys for the SAP HANA Database' in the 'SAP HANA Administration Guide' or 'SAP HANA Tenant Databases'.

Administration and monitoring tools

Several tools are available for the administration of SAP HANA. While all tools support database-level administration (which is comparable to the administration of a single-container system), system-level administration of tenant databases requires the SAP HANA cockpit. With SPS10 a new catalog for the system database is available in the SAP HANA Cockpit to monitor overall system health and manage all tenant databases. The SAP HANA studio is required for system configuration tasks. For more information about the SAP HANA cockpit and other administration tools, see SAP HANA Administration tools in the SAP HANA Administration Guide.

Starting with HANA 2.0, a transition regarding tooling begins. Please see SAP Note 2396214.

Backup and Recovery

The focus of the SAP HANA B&R concept for MDC is on backing up and restoring the individual tenant databases, including the system database. So, in the end, there will be a set of tenant backups and the system database backup that can be restored individually or be used all together to restore a complete MDC system from scratch:

install the MDC software/system; the system database is implicitly created
start recovery of the system database; afterwards, the system database tracks all tenants
no additional CREATE DATABASE requests are needed
start recovery tenant by tenant
Using Backint

Using 'Backint' with external backup tools for recovering MDC systems/tenants requires SPS09 revision 94 or higher. This fix will allow using Backint for backing up and recovering exactly the same tenant of an MDC system. It may not be used for restoring it into another tenant in the same or in another MDC system (tenant copy).

As of SAP HANA 2.0 SPS01, Backint-based backups of a tenant database can be recovered into a tenant database using a target SID or tenant name that is different from the original one. Likewise, Backint-based backups of a single container system can be recovered into a tenant database using a target SID that is different from the original one.

Moving and copying tenant databases

Tenant databases can be moved or copied using the backup and restore capabilities. This requires downtime only for the tenant database affected; the other tenant databases can stay online. Simply perform a backup, and then either create a new tenant database and restore the backup into it, or restore the backup into an existing tenant database.

With SPS12 a new path for copying and moving tenant databases has been introduced. It utilizes the algorithms of system replication to enable a near-zero-downtime copy/move of a tenant database, within an MDC system or between MDC systems. This feature is currently available from the command-line interface only. NOTE: For the time being this copy/move cannot be used if HA/DR system replication is active. This is addressed in future planning.

The standard copy/move process of tenant databases requires an initial certificate configuration in order to enable communication between systems.

Inside an MDC system, in a non-production setup or isolated environment, it may be reasonable to proceed without the need for trusted communication. Starting with HANA 2.0, the internal communication of the copy/move processes may also run unencrypted.

Cross-Database Access

Inside the same SAP HANA system, read-only queries between tenant databases are possible. Database objects like tables and views from one tenant database can be read by users from other tenant databases. By default, cross-database access between tenants is inactive. It must be explicitly enabled, and a user mapping is needed in the remote tenant database (more information is available in the SAP HANA Administration Guide and SAP Note 2196359). As cross-database access has restrictions at certain points, please refer to the documentation for further guidance.
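The activation described above is done in the system database via ini-file parameters, along the lines of SAP Note 2196359. A sketch; the instance number, credentials, and tenant names DB1/DB2 are placeholders:

```shell
# Enable cross-database access system-wide and allow DB1 to query DB2.
# A user mapping in the remote tenant is still required afterwards.
hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p "SysPassword1" "
  ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('cross_database_access', 'enabled') = 'true',
        ('cross_database_access', 'targets_for_DB1') = 'DB2'
    WITH RECONFIGURE"
```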

High availability/Disaster recovery

The SAP HANA HA/DR solution chosen applies to the whole SAP HANA instance, thus to all tenant databases including the system database. The HA/DR solution SAP HANA system replication works with an "all or nothing" approach: all tenant databases are subject to failover to another data center. Newly created tenant databases are integrated into the replication process automatically after they have been backed up. The HA/DR solution SAP HANA storage replication is agnostic about the instance content and thus requires no special actions regarding tenant databases.

Load management

When implementing MDC, attention to workload management is required. The help documentation, in particular the MDC operations guide, discusses this topic. In order to divide up system resources, parameters such as the allocation limit and max_concurrency should be set when tenant databases share the same host.

Current restrictions / Not supported (i.e. not working with MDC) in SPS12 and further notice

SAP HANA Smart Data Streaming can be installed on an SAP HANA system configured for multitenant database containers only as of SPS12. Refer to the SDS material for further information.

SAP HANA Dynamic Tiering is not recommended for production use within MDC systems up to SPS12. SAP HANA dynamic tiering only supports low tenant isolation. Each tenant database is associated with a maximum of one dynamic tiering worker/standby pair. Conversely, the same dynamic tiering worker/standby pair cannot be associated with multiple tenant databases. Refer to the DT material for further information.

For the time being, copy/move of a tenant database cannot be used if HA/DR system replication is active. This is addressed in future planning. In that case, please use backup/recovery for copy/move purposes instead.

Backup and recovery using snapshots is not yet available for MDC.

SAP HANA Application Lifecycle Management support for tenant databases begins with Revision 96; see SAP Note 2073243.

FullSystemInfoDump does not work on individual tenants; it can only be run from the system database for all tenants.


For more details, refer to SAP Note 2096000.

Saturday, 21 October 2017

Startup of SAP HANA Service With Persistence can Hang

Tags


Symptom:

The SAP HANA database hangs during startup. In the traces of the hanging SAP HANA service with persistence (e.g. indexserver or xsengine), you repeatedly find entries similar to:

[0000]{-1}[-1/-1] 0000-00-00 00:00:00.000000 w Service_Startup  Starter.cc : Version collection triggered times. but version(s) still remain(s). Please check if there is a blocker transaction

Reason and Prerequisites

Reason:

Due to a programming error in the time synchronization, a deadlock can occur when an SAP HANA service starts up.

Affected Releases:

SAP HANA 2.0 database:
Revisions of SPS00
Revisions <= 012.01 (SPS01)

Prerequisites:

During startup of a service of the SAP HANA database, row store versions are present. This can especially happen if an SAP HANA service is not terminated gracefully or if a takeover is performed.
These row store versions are then garbage collected.


Solution:

Apply one of the following SAP HANA database Revisions:

SAP HANA 2.0 database:
Revisions >= 012.02 (SPS01)

Workaround:

To resolve the deadlock, you can restart the master indexserver of the affected SAP HANA database. For details on how to stop and start a service with SAP HANA Studio, please refer to the section "Stop and Start a Database Service" in the SAP HANA Administration Guide.

For more details, refer to SAP Note 2528893.

Thursday, 19 October 2017

SAP HANA Crash during database start up

Tags

Symptom:

The indexserver or other database services crash during database startup with the following call stack:

0: ptime::Transaction::Transaction
1: ptime::TraceUtil::getOidsFromObjects
2: ptime::ExpensiveStatementTraceFilter::load
3: ptime::ExpensiveStatementTracerImpl::loadConfig
4: ptime::Config::startup
5: TRexAPI::TREXIndexServer::startup
6: nlsui_main

Reason and Prerequisites
The expensive statement trace is turned on and a filter on a specific object is defined. During startup, the tracer tries to use a transaction, but the object has not been initialized yet.

Solution:

The crash is fixed with SAP HANA Revision 81.

Workaround: To be able to start up the database, the filter on the objects needs to be removed. Therefore, please remove the related entry in global.ini. The complete section about the expensive statement trace in global.ini looks similar to this:

[expensive_statement]
threshold_duration = 1
user = system
object = sys.m_connections

To solve the situation, the entry "object = ..." needs to be removed.
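As an illustrative sketch of this workaround (performed here on a scratch copy in /tmp; on a real system, locate the customer-specific global.ini first and keep a backup before editing):

```shell
# Recreate the section shown above in a scratch copy (illustrative only).
cat > /tmp/global.ini <<'EOF'
[expensive_statement]
threshold_duration = 1
user = system
object = sys.m_connections
EOF

# Drop the "object = ..." line; the rest of the section stays intact.
sed -i '/^object[[:space:]]*=/d' /tmp/global.ini
cat /tmp/global.ini
```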

For more details, refer to SAP Note 2018947.

Wednesday, 18 October 2017

SAP Hana Log Backup

Tags

By default, SAP HANA creates redo log backups automatically at regular intervals. During a log backup, only the actual data of the log segments is written, for each service with persistence, from the log area to service-specific log backups in the file system or to a third-party backup tool.

SAP HANA automatically continues creating log backups of all the log segments that have not yet been backed up.

The default log backup location can be set in the system.

After a system failure, you may need log backups to recover the database to the desired state. 

Log backup file format:

log_backup_<volume ID>_<log partition ID>_<first redo log position>_<last redo log position>.<backup ID>

Example:
A log backup name could look like this: log_backup_5_1_876543_76543.8734647847
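The naming scheme can be decoded with simple shell string handling; a small sketch using the example name above (the name is illustrative, not from a real system):

```shell
# Decode the components of a log backup file name (illustrative example).
name="log_backup_5_1_876543_76543.8734647847"

base=${name%.*}            # name without the backup ID suffix
backup_id=${name##*.}      # text after the last dot

volume_id=$(echo "$base"    | awk -F_ '{print $3}')
partition_id=$(echo "$base" | awk -F_ '{print $4}')
first_pos=$(echo "$base"    | awk -F_ '{print $5}')
last_pos=$(echo "$base"     | awk -F_ '{print $6}')

echo "volume=$volume_id partition=$partition_id first=$first_pos last=$last_pos id=$backup_id"
# prints: volume=5 partition=1 first=876543 last=76543 id=8734647847
```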

Tuesday, 17 October 2017

SAP Kernel Upgrade steps for Windows Environment


Introduction:

The following is the procedure for a kernel upgrade of an SAP system on the Windows platform.


1. Log in to www.support.sap.com and download the latest kernel files.


Downloads -> SAP Support Packages -> Support Packages & Patches - Entry by Application Group -> SAP Application Components -> SAP ERP -> SAP ERP 6.0 -> Entry by Component -> SAP ECC Server -> SAP KERNEL 7.40 EXT 64-BIT UC -> Windows Server on x64 64bit -> Sybase ASE: "SAPEXEDB_XXX.SAR" (Kernel Part II)


Downloads -> SAP Support Packages -> Support Packages & Patches - Entry by Application Group -> SAP Application Components -> SAP ERP -> SAP ERP 6.0 -> Entry by Component -> SAP ECC Server -> SAP KERNEL 7.40 EXT 64-BIT UC -> Windows Server on x64 64bit -> Database Independent: "SAPEXE_XXX.SAR" (Kernel Part I)

2. Log in to the server at OS level as SIDadm.

3. Make copies of the existing folders in a backup folder:
                   Drive:\usr\sap\\SYS\exe
4. Stop the SAP Instance & Services

  • SAPSID_00
  • SAPSID_01
  • SAPHostControl
  • SAPHostExec

5. Extract the SAR Files 


  •      Go to the command prompt
  •      Go to the path above and extract the files
  •       Drive:\sapcar -xvf SAPEXE_XXX.SAR
  •       Drive:\sapcar -xvf SAPEXEDB_XXX.SAR

6. Copy and paste the extracted files to \usr\sap\SID\SYS\exe\uc\NTAMD64\ .

Hint:

Before that, take a backup of the kernel folder.

7. Select the option "Copy and Replace" if a pop-up message appears saying "There is already a file with the same name in this location."

8. Start the SAP Instance & Services

  • SAPSID_00
  • SAPSID_01
  • SAPHostControl
  • SAPHostExec
9.  Check the updated kernel from the command prompt by running disp+work.

10.  To check the kernel version and patch level, go to SM51.

Monday, 16 October 2017

SAP Kernel Upgrade steps for SUSE Linux Environment



Steps to Upgrade SAP Kernel  in SUSE Linux Environment

Preparation:
   Go to transaction code SM51 and check the current SAP kernel version and patch level.

Kernel Dump download:

             For the kernel download, backup & transfer process, follow these steps.

1. Download the necessary Kernel Patch from http://support.sap.com
   
     Downloads -> SAP Support Packages and Patches -> SAP Kernel -> choose the particular version, e.g. SAP KERNEL 721.

The packages should look similar to this: Database Independent, SAPEXE_XX.SAR (Kernel Part I) and SAPEXEDB_XX.SAR (Kernel Part II).

2. Create a new directory and transfer the two downloaded packages to it.

 mkdir /newkernel 

3. Backup the old kernel to another location


cp -pR /sapmnt/SID/exe/* /oldkernel

Hint:


The default location for the kernel is /sapmnt//exe

Kernel Upgrade Process steps

1. Log in to the system as adm, go to the /newkernel directory created above, and extract the two packages.

  • SAPCAR -xvf SAPEXE_XX.SAR
  • SAPCAR -xvf SAPEXEDB_XX.SAR

2. Stop the SAP system and saposcol (logged in as adm)

  • stopsap R3
  • saposcol -k

Hint :

Before stopping the SAP system, please make sure logged-on users and batch jobs are alerted.

3. Log in to the system as root and copy all the extracted kernel packages into /sapmnt//exe
  • cd /newkernel
  • cp -pR * /sapmnt/SID/exe/
4. After all the packages are copied, run the following commands
  • cd /sapmnt/SID/exe/
  • ./saproot.sh SID
5. Switch user to adm and start the SAP system and saposcol as well.
  • saposcol
  • startsap R3
After Upgrade Check the Kernel Versions:

1. Go to transaction code SM51 and click Release Notes to check the latest kernel number.

Sunday, 15 October 2017

How to check SAP HANA Database Version

Tags

Check SAP HANA Database Version

There are three ways to check SAP HANA Database Version.

1. Via SAP GUI
2. Via SAP HANA Studio
3. Via Command Line

Using Command Line:

Log in at OS level as the database administrator user.

example: 
usr/sap/SID/HDB00> HDB version


Using SAP HANA Studio

Open SAP HANA Studio, right-click the added HANA system, and open Properties to get the version history.


Using SAP GUI

Log in to SAP GUI and choose System -> Status to get the database version.


Saturday, 14 October 2017

SAP User Security profile parameter




Profile parameter | Description | Default value | Recommended value
login/min_password_lng | Minimum password length for user passwords | 3 | 3
login/password_expiration_time | Number of days between forced password changes | 0 | 90
login/fails_to_session_end | Number of invalid logon attempts allowed before the SAP GUI session is disconnected | 3 | 3
login/fails_to_user_lock | Number of invalid logon attempts within a day before the user ID is automatically locked by the system | 12 | 5
rdisp/gui_auto_logout | Time, in seconds, after which SAP GUI is automatically disconnected because of inactivity | 0 | 30
auth/test_mode | Switch to report RSUSR400 for authority checks | N | N
auth/system_access_check_off | Switch off automatic authority checks | N | N
auth/no_check_in_some_cases | Special authorization checks switched off by the customer | N | Y
login/ext_security | Security access controlled by external software | N | N
auth/rfc_authority_check | Authority check for remote function calls from within ABAP programs | 0 | 1
login/failed_user_auto_unlock | Disable the system function for automatic unlocking of users at midnight | 0 | 1
login/no_automatic_user_sapstar | Disable the ability to log on as SAP* with password PASS when SAP* has been deleted | 0 | 1
auth/no_check_on_tcode | Disable the check of S_TCODE on non-Basis transactions | N | N
auth/auth_number_in_userbuffer | Number of authorizations allowed in the user buffer | 800 | 1000
auth/authorization_trace | Every trace is logged once in table USOBX | N | N
auth/check_value_write_on | Write values for SU53 security check / authorization failure analysis | Y | Y
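As a hedged sketch, the recommended values from the table could be applied in the default profile (DEFAULT.PFL); this is an illustrative fragment, not a complete or mandatory configuration:

```ini
# DEFAULT.PFL fragment -- illustrative, using the recommended values above
login/min_password_lng = 3
login/password_expiration_time = 90
login/fails_to_session_end = 3
login/fails_to_user_lock = 5
rdisp/gui_auto_logout = 30
login/failed_user_auto_unlock = 1
login/no_automatic_user_sapstar = 1
```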