Sunday, 23 December 2018

What is Client in SAP

Client
Introduction
 
A client is a self-contained business entity or unit within each SAP system with independent information and data. The main objective of the client is to keep the business data isolated so that other clients cannot access or change them.

The SAP client concept allows an organization to split a system into logical subunits. Clients may operate as separate business units, while all data is stored in a common database. Client-specific data includes user master records (including authorizations and user groups), customizing data, and application/business data.

• Client-specific data is data affecting only one client, such as user master and application data.
• Cross-client data is data affecting the whole system environment, such as cross-client Customizing data and all Repository objects.

SAP supports up to 1,000 clients per system, numbered from 000 to 999.

SAP Standard Clients 

SAP R/3 ships with three built-in standard clients:

000 – Master Client aka Reference Client

Client 000 contains a simple organizational structure of a test company and includes parameters for all applications, standard settings, configurations for the control of standard transactions, and examples for many different business application profiles. It also contains client-independent settings.
 

001 – Copy of Client 000
This client is a copy of client 000, including the test company. Its settings are client-independent if it is configured or customized. Client 001 is normally used as the basis for creating new clients.

066 – EarlyWatch Client (SAP Support)
The SAP EarlyWatch Alert is a diagnostic service for solution monitoring of SAP and non-SAP systems in SAP Solution Manager. The alert report may cover performance issues, average response times, current system load, database administration, etc.

Golden Client
The Golden Client is the client where all development, i.e. changes and modifications such as configuration settings and cross-client customizing, is made and tested before being transported to the quality and production clients.

A Golden Client is configured to automatically record all changes and store them in change requests.

SAP Work Process and Types


Work Processes

The SAP work process is a component of the application server that executes an ABAP application.
SAP work processes are started as operating system processes, each with its own process ID (PID).
The majority of the processing of the application is performed by the SAP work processes.
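At the operating system level you can see these processes directly; a quick sketch (the "dw.sap<SID>" grep pattern is illustrative, matching the disp+work process names):

ps -ef | grep dw.sap<SID>    # each dw process is one work process with its own PID

Inside the system, transaction SM50 shows the same work processes together with their types, which are listed below.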

D – Dialog
V – Update
E – Enqueue
B – Background
M – Message Server
S – Spool
G – Gateway Server


Dialog

It is the only work process type that communicates interactively with users. There should be at least two dialog work processes per instance. Dialog work processes initiate update, background, and spool work.

Dialog work processes are in charge of the interactive tasks of the R/3 system. User requests are placed by the dispatcher into request queues and assigned to the next free dialog work process. A dialog work process executes just one dialog step at a time and then immediately becomes free for the next user request (dialog step) assigned by the dispatcher. This is called work process multiplexing: dialog work processes constantly switch between different user sessions. This allows a great deal of resource sharing; otherwise the system would need as many dialog work processes as expected interactive users. It works much like a multiuser operating system. Depending on the type of business transactions the users are working on, a dialog work process can support from 5 to more than 10 simultaneous users, so 10 dialog work processes could theoretically support approximately 100 users. However, this is just a rule of thumb: if users have to wait long to get a free work process, you should increase the number of dialog work processes.

Update

It updates transactions in the database and is initiated by a dialog work process. There should be at least one update work process in the entire system; it is also recommended to have one update work process for every five dialog work processes.

Enqueue

It provides locks for records that are going to be updated and thus ensures consistency of updates. Only one enqueue work process is configured in the system during installation. It is possible to have more than one enqueue work process, provided they are configured on the central instance.

Background

Expensive or time-consuming tasks are scheduled to run non-interactively in background mode. There should be at least two background work processes in the system.

Message

There should be only one message server in the entire R/3 system. It manages all the dispatchers and load-balances requests by identifying the least-loaded dispatcher. It also forwards lock requests coming from dialog instances to the enqueue server.

Gateway

It provides a means of communication between SAP and non-SAP systems. There is one gateway per instance.

Spool

It prints documents to a printer, sends output to a fax machine, and so on. There should be at least one spool work process in the entire system; more spool processes can be configured depending on the print/spool volume.

Tuesday, 4 December 2018

Sybase sybctrl: connect to DB fails error


Symptom

On Linux / UNIX, sybctrl cannot connect to the database.
The error message is similar to:
Could not load SYB library libsybdrvodb.so. No database connection possible.
Other Terms

sybctrl sybxctrl

 
Reason and Prerequisites

If sybctrl has the s-bit set and runs as a setuid program, LD_LIBRARY_PATH is ignored by the operating system for security reasons.
Shared libraries will then not be found and loading the ODBC driver will fail.
 
Solution

Use sybxctrl, which is a copy of sybctrl without the s-bit set.

sybxctrl is created automatically if the database is stopped using the stopsap or stopdb script.

Otherwise, create sybxctrl by executing the following as user <sid>adm:
 
cd /usr/sap/<SID>/SYS/exe/run
 

cp sybctrl sybxctrl
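As an optional check, the s-bit shows up as an "s" in the owner execute position of sybctrl and must be absent on the copy:

ls -l sybctrl sybxctrl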




For more details, refer to SAP Note 2719763.

Saturday, 24 November 2018

Apache Web server Configuration for SAP Content Server(DMS).


Symptom

To operate the SAP Content Server for UNIX platforms, you require Apache Web Server 1.3.xx (where xx is Version 22 or higher). This note describes the compiler settings for the relevant ANSI C compilers of the different supported platforms and special situations that may occur when you generate the Apache Web server. To understand this note, you must be familiar with the commands of the UNIX operating system and of the development tools.

Reason and Prerequisites

To generate the Apache Web server, you require the following software components:

    a source distribution of the Apache Web Server Release 1.3.xx or 2.0.xx or 2.2.xx

    a correctly installed ANSI C compiler for the relevant platform, including the relevant tools such as make, ld, ar and so on


In particular, you must ensure that the compiler can create executable 64-bit code. The only exception applies to Linux (IA32) platforms.

The compiler switches listed below are relevant for the compiler of the relevant operating system vendor. GNU compilers are supported only for Linux distributions. Please set the C/C++ compilers accordingly via the variables CC and CXX.

The following steps were tested using a Bourne shell.

You must unpack the source distribution in a temporary directory. If not specified otherwise, all commands are executed in the root directory of the source distribution, even if the scripts are located in subdirectories of the source distribution. Do not use administrator rights to execute any part of the generation. This prevents unintended installations in system directories.

In addition to this note, you must read the notes and installation guidelines contained in the distribution.

Every HP-UX system has a pre-installed compilation tool-chain; however, this tool-chain is not usable for ordinary programs. It is used _only_ for rebuilding the HP-UX kernel when the HP-UX system is updated. If you check the version output of cc, it will list a (bundled) compiler. A full HP compilation suite should therefore be used to build Apache, for example HP compiler suite version aCC A.06.16.01.


Solution

    1. Overview:

              The Apache Web server is generated in three steps. In addition, you can generate a binary distribution in one step. The binary distribution has the advantage that you must specify the installation directory only at the time of installation. Furthermore, once you have created such a distribution, you can install the exact same software level on several hosts.

    2. Detailed description of the generation in three steps

        a) Generating the make environment (configure step)

                       With the script "Configure", you create the make files for the individual server components. You can use call parameters and environment variables to control the generation of the make files.

                       For information about the variables that you must set BEFORE executing the configure script, see the section "Compiler switches and linker switches for the different platforms".

                       To enable the content server or the cache server module to run on the Web server, you MUST activate Dynamic Shared Object (DSO) support (using the parameter "--enable-shared=max").

                       The configure call is identical for all platforms except hpia64:

                       For Apache 1.3:

                       configure --prefix=<InstDir> --enable-module=most --enable-shared=max
                       For Apache 2.0/2.2 on hpia64:

                       configure --prefix=<InstDir> --enable-mods-shared=most --with-mpm=prefork


                      For Apache 2.0/2.2 on all other platforms:

                     configure --prefix=<InstDir> --enable-mods-shared=most --with-mpm=prefork --with-expat=builtin

                       You can use the switch "--prefix" to set the absolute installation directory <InstDir>. The system copies all files that are required at runtime to this directory in the subsequent installation step. If you want to use a variable installation directory, you must create a binary distribution.

                       With "--with-mpm=prefork" we recommend configuring Apache in prefork mode, as UNIX inherently supports multiprocessing. You can also use the setting "--with-mpm=worker" if you wish to run Apache in multithreaded mode.



        b) Generating the binary files (compile step)

                       You can use the call "make" to start the compilation of the source code. This step may take a few minutes.

        c) Installing the binary files (install step)

                       You can use the command "make install" to copy all binary files, the default Web site, the online documentation and so on to the installation directory that you have specified in the configure step.

    3. Creating a binary distribution

              For the binary distribution, you must also set the required compiler parameters and linker parameters as environment variables before you call the generation commands.

              The script "./src/helpers/binbuild.sh" (Apache 1.3)

                          "./build/binbuild.sh" (Apache 2.0/2.2)
              executes the three phases configure, make and install. In the install step, the binary distribution is created as a tape archive (tar). The script outputs the name and storage location of the archive. The binary distribution created using "binbuild.sh" supports Dynamic Shared Objects (DSO).

              After you unpack the tar archive on the target host, perform the final installation using the following script:

              ./install-bindist.sh <InstDir>

                If you do not specify <InstDir>, the system installs the Web server in the directory /usr/local. Especially when you execute this script as user root, the Web server may easily be installed in an unintended location from which it is difficult to remove again. Therefore, we recommend that you perform the installation using the user ID under which the Web server is to run in the future. In this way, all group rights and owner rights are correctly assigned from the beginning.

    4. Compiler switches and linker switches for the different platforms:

              All additional compiler switches and linker switches are set using the environment variables CFLAGS (compiler), LDFLAGS (linker) and EXTRA_LDFLAGS_SHLIB (linker flags for shared libraries).

              These variables must be exported before the configure step; see the build example after the platform list.

        a) Linux 32-bit (iA32)

                       For both Apache 1.3 and Apache 2.0/2.2

                       Additional switches are not required.

        b) Linux IA64 (64-bit)

                       For both Apache 1.3 and Apache 2.0/2.2

                       LDFLAGS="-L/lib64"

        c) Linux PPC64 (64-bit)

                       For both Apache 1.3 and Apache 2.0/2.2

                       CFLAGS="-m64", LDFLAGS="-m64 -L/lib64"

        d) Linux x86_64 (64-bit)

                       For both Apache 1.3 and Apache 2.0/2.2

                       LDFLAGS="-L/lib64"

        e) HP-UX ('PA-RISC' 64-bit)
                       For both Apache 1.3 and Apache 2.0/2.2

                       CFLAGS="+DA2.0W", LDFLAGS="+DA2.0W -lcl"

        f) HP-UX (IA64 64-bit)

                       For both Apache 1.3 and Apache 2.0/2.2

                       CFLAGS="+DD64 +DSitanium2", LDFLAGS="+DD64 +DSitanium2"

                       Please also refer to the attached SAP Note 940584 on this platform.

        g) SUN Solaris_64

                       For both Apache 1.3 and Apache 2.0/2.2

                       CFLAGS="-m64"

                h) SUN SolarisX64

                       CFLAGS="-m64"

        i) IBM AIX 5.1 and 5.2 (64-bit)
                       For Apache 1.3 only

                       CFLAGS="-q64", LDFLAGS="-q64", EXTRA_LDFLAGS_SHLIB="-b64"
                       For IBM, you must adjust the make file templates to correctly call the program "ar". Use the following command

            find . -name '*.tmpl' -print

                       to display all the make file templates.  The archiving program "ar" requires the switch "-Xany" to correctly create 64-bit archive files.

                       In the templates, search the command "ar cr" and replace it with "ar -Xany cr". Save your changes. Alternatively, you can also create the following shell script that automatically implements the changes in all templates:

x=`find . -name '*.tmpl' -print`
for i in $x
do
   sed -e 's@ar cr@ar -Xany cr@g' "$i" > "$i.sav"
   mv "$i.sav" "$i"
done


                       Due to an inaccuracy in the configure script, AFTER you called "Configure", you must manually change the command "ar cr" to "ar -Xany cr" in the file ./src/modules/standard/Makefile.



                       For Apache 2.0/2.2 only

                       CFLAGS="-q64", LDFLAGS="-q64", EXTRA_LDFLAGS_SHLIB="-b64", OBJECT_MODE=64
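                       Putting it all together, a complete generation might look as follows (a sketch for Linux x86_64 with Apache 2.2 in a Bourne shell; <InstDir> is a placeholder for your installation directory):

LDFLAGS="-L/lib64"
export LDFLAGS
configure --prefix=<InstDir> --enable-mods-shared=most --with-mpm=prefork --with-expat=builtin
make
make install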





For more details, refer to SAP Note 664384.


Friday, 5 October 2018

SQL failed with error "column ambiguously defined" in HANA Database



Symptom
Execution of an SQL statement fails with the error:

SAP DBTech JDBC: [268]: column ambiguously defined: $rowid$: line col (at pos xxxx)

Environment

    HANA 1.0
    HANA 2.0

Reproducing the Issue

Execute the SQL statement via the HANA Studio SQL console or hdbsql.

Cause

When an internal column such as $rowid$ is used, the table name must be specified; otherwise the error "column ambiguously defined" occurs.
 
Resolution

If your SQL statement contains

ORDER BY "$rowid$"

and the execution fails with the error "column ambiguously defined", you need to modify the statement to qualify the column with the table name, for example:

ORDER BY <table_name>."$rowid$"
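For illustration, a minimal sketch with a hypothetical table MYTAB:

-- may fail with "column ambiguously defined":
SELECT * FROM MYTAB ORDER BY "$rowid$";
-- qualified with the table name, the reference is unambiguous:
SELECT * FROM MYTAB ORDER BY MYTAB."$rowid$";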

For more details, refer to SAP Note 2695943.

How to install HANA license by OS commands

Symptom

You need to use OS commands to access the HANA database and manage the license because, for some reason, no HANA Studio or HANA Cockpit is available to manage the license.

Environment

    HANA Platform 1.0
    HANA Platform 2.0

Resolution

1. Log on as the <sid>adm user and connect to the database via hdbsql.

    If it is a single-container system, try the command below (<nn> is the instance number):

> hdbsql -n <hostname>:3<nn>15 -i <nn> -u <username> -p <password>

    If it is an MDC system, try connecting to the system DB with the command below:

> hdbsql -n <hostname>:3<nn>13 -i <nn> -u <username> -p <password>

2. Enable multiline mode in hdbsql

hdbsql => \mu

3. You need a new license key, which you can download from the SAP Support Portal.

4. Enter the statement SET SYSTEM LICENSE '<license file content>'.

Note that the content of the license file you downloaded must be enclosed in single quotation marks.

5. Execute the statement with the command \g:

hdbsql => \g


Remarks:
Installing a new license via hdbsql does not replace the old license. Consider deleting the old license key before installing the new one.

You can also delete license keys by executing the SQL statement UNSET SYSTEM LICENSE ALL. 
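Putting the steps together, a sample session might look like this (a sketch; the host "hanahost", instance number 00, and user SYSTEM are hypothetical, and the license text is abbreviated):

> hdbsql -n hanahost:30013 -i 00 -u SYSTEM -p <password>
hdbsql => \mu
hdbsql => UNSET SYSTEM LICENSE ALL
hdbsql => \g
hdbsql => SET SYSTEM LICENSE '<contents of the downloaded license file>'
hdbsql => \g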

For more details, refer to SAP Note 2690863.

 
 

Friday, 14 September 2018

Unable to schedule database job due to ASE Error SQL4002


Symptom

     Scheduling a database job failed with the following error:

    [ASE Error SQL4002][SAP][ASE ODBC Driver][Adaptive Server Enterprise] Login failed.
    Exception CX_DB6_CALENDAR_ACTION in class CL_DB6_CALENDAR_ACTION method REFRESH_ACTIONS line 64/ RC=1074

    The DBACOCKPIT DB connection test also failed and reported the same error '[ASE Error SQL4002][SAP][ASE ODBC Driver][Adaptive Server Enterprise] Login failed.'

Environment

  •     SAP Adaptive Server Enterprise (ASE) 15.7 Business Suite
  •     SAP Adaptive Server Enterprise (ASE) 16.0 Business Suite
Solution
  1.     Run transaction DBACOCKPIT.
  2.     In the left panel, choose 'Database Connections'.
  3.     Choose the 'Remote Database Connection' of type 'SAP ASE' and click 'Change User Credentials'.
  4.     Enter the correct password of the DB user.

For more details, refer to SAP Note 2692998.

Friday, 31 August 2018

HANA : How to set memory allocation limit for tenant databases

Tags

Symptom

You want to manage and control the memory usage of your multiple-container system by configuring a global allocation limit for individual tenant databases.
Environment

    As of SAP HANA Database 1.0 SPS9
    SAP HANA Database 2.0

Resolution

You can use the parameter allocationlimit in the [memorymanager] section to limit the maximum amount of memory that can be allocated per process for all services of a tenant database.
For example, execute the command below from the system database (the allocationlimit value is in MB):
< SPS11


ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'DATABASE', '<tenant_db_name>') SET ('memorymanager', 'allocationlimit') = '8192' WITH RECONFIGURE;

>= SPS11

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'DATABASE', '<tenant_db_name>') SET ('memorymanager', 'allocationlimit') = '8192' WITH RECONFIGURE;
On SPS09, you need to restart the HANA database for the change to take effect.

On later SPS levels, the memory realignment happens on the fly but takes some time. To make it happen immediately, you can restart the database.

To confirm the changes with a systemdb connection, you can use this SQL query:

SELECT * FROM "SYS_DATABASES"."M_SERVICE_MEMORY";
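You can also read the configured value back from the system database; as a sketch (the SYS_DATABASES schema aggregates the tenants' configuration):

SELECT * FROM "SYS_DATABASES"."M_INIFILE_CONTENTS" WHERE KEY = 'allocationlimit';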


For more details, refer to SAP Note 2175606.

Friday, 17 August 2018

ST22 dumps with runtime error CONVT_NO_NUMBER


Symptom

ST22 dumps are generated with the runtime error CONVT_NO_NUMBER.
The following is the detailed ST22 dump that was constantly being generated:

Category ABAP Programming Error
Runtime Errors CONVT_NO_NUMBER
Except. CX_SY_CONVERSION_NO_NUMBER
ABAP Program CL_HDB_ALERT_COLLECTOR_E2E====CP
Application Component HAN-DB

Short Text "1 since local time: "  cannot be interpreted as a number.

What happened? 

 
Error in the ABAP Application Program. The current ABAP program "CL_HDB_ALERT_COLLECTOR_E2E====CP" had to be terminated because it has come across a statement that unfortunately cannot be executed.
 
Environment

SAP HANA Platform Edition 1.0

SAP HANA Platform Edition  2.0

Resolution

Apply SAP Note 2211415 (SAP HANA alerting composite SAP Note).


For more details, refer to SAP Note 2669097.
 

Friday, 10 August 2018

How To Activate SAP HANA Memory Allocator Traces


Please follow the steps below in order to collect the allocator trace.

The hdbcons commands need to be executed as the <sid>adm user at OS level.
The commands below are given for a single database container system.
If you are running MDC, please use hdbcons -p <pid> to execute the subcommands for the tenant DB (see the sketch below for finding the PID).
In case you are running MDC in high isolation mode, please refer to SAP Note 2410143.
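To find the process ID for "hdbcons -p", you can look up the tenant's indexserver process at OS level, for example (a sketch):

ps -ef | grep hdbindexserver
# then, e.g.: hdbcons -p <pid> "mm top -l 20 <allocator>"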
  •     If possible, clear the SQL Plan Cache or restart the system:
    ALTER SYSTEM CLEAR SQL PLAN CACHE;
 
  •     Reset possibly existing trace entries. Replace <allocator> with the complete allocator name including the hierarchy, i.e. the full string returned as CATEGORY from M_HEAP_MEMORY.
    hdbcons "mm resetusage -r <allocator>"
   
  • Enable astrace for the specified allocator
    hdbcons "mm flag <allocator> -sr astrace,dstrace"
 
  •     Create an initial allocator trace report, and write down the current size of the allocator. If the SQL does not return any record, there is currently no allocation in this allocator.
    hdbcons "mm top -l 20 <allocator>" > report_0_$(date +%y%m%d%H%M%S).txt

    SELECT NOW(), HOST, PORT, CATEGORY, ROUND(EXCLUSIVE_SIZE_IN_USE/1024/1024) AS "SIZE(MB)" FROM M_HEAP_MEMORY WHERE PORT LIKE '%03' AND CATEGORY = '<allocator>';
 
  •     In case there is a suspected query/application transaction, execute it to reproduce the issue. Otherwise, wait and monitor the size of the allocation until you see noticeable growth compared to the starting point.
    Save the current allocation size, create an allocator trace report, and generate a callgraph.

    SELECT NOW(), HOST, PORT, CATEGORY, ROUND(EXCLUSIVE_SIZE_IN_USE/1024/1024) AS "SIZE(MB)" FROM M_HEAP_MEMORY WHERE PORT LIKE '%03' AND CATEGORY = '<allocator>';

    hdbcons "mm top -l 20 <allocator>" > report_$(date +%y%m%d%H%M%S).txt

    hdbcons "mm cg -r <allocator>" > mmcallgraph_$(date +%y%m%d%H%M%S).dot

    Note that it is best to check the allocator size and collect the report when there is no (or relatively low) load on the system, if possible. Only then can we be sure that the stacks recorded in the allocator trace point to the leak rather than to meaningful allocations for active queries.
  •     Repeat the previous step and create 3-5 reports & callgraphs together with the allocation size returned by the SQL.
  •     Disable astrace
    hdbcons "mm flag <allocator> -dr astrace,dstrace"
  •     Cleanup
    hdbcons "mm resetusage <allocator>"
 
  •     Zip the allocator sizes returned by the SQL, the allocator trace reports, and the callgraphs generated in the previous steps, and contact SAP Support for root cause analysis.

For more details, refer to SAP Note 2620830.

Tuesday, 17 July 2018

How to Enable Granular Permissions in SAP ASE Database

Tags

Symptom

Protecting sensitive data from prying eyes is a must in today’s IT environment. Sybase ASE 15.7 ESD#2 has introduced a new security feature called Granular Permissions. This feature enables database administrators to fine-tune the separation of duties that has been in place since the introduction of role-based security. Granular permissions provide DBAs with the functionality to avoid security breaches, and have tighter control over which users can access sensitive data.

Grantable system privileges enable you to enforce the following security concepts:

    the separation of duties, which requires - for particular sets of operations - that no single individual is allowed to execute all operations within the set
    the principle of least privilege, which requires that all users in an information system should be granted as few privileges as are required to do the job

Enabling granular permissions reconstructs system-defined roles (sa_role, sso_role, oper_role, and replication_role) as privilege containers consisting of a set of explicitly granted privileges. You can revoke explicitly granted system privileges from system-defined roles and regrant them to these roles.






Solution

In SAP Business Suite systems on SAP ASE, granular permissions are enabled when the system is installed on SAP ASE >= 16.0 SP03 or when SAP ASE is upgraded to a version >= 16.0 SP03. Setup of granular permissions for the SAP ASE logins sapsa and sapsso and the roles sap_adm and sap_mon is performed by saphostctrl during the SAP ASE upgrade. No additional action is required. The changes to SAP ASE user permissions performed by saphostctrl during the SAP ASE upgrade are documented below.

In case SAP ASE has been upgraded manually (i.e. saphostctrl was not used to perform the SAP ASE upgrade), then these steps can be performed to enable granular permissions manually:

    1. Enable granular permissions and unlock the sa SAP ASE login.
    Log on as user sapsso and execute the following commands:

    use master
    go
    exec sp_configure 'enable granular permissions',1
    go
    exec sp_locklogin sa, 'unlock'
    go

    2. Create users sapsso and sapsa in the master database and grant SAP ASE server permissions.
    Log on as user sa and execute the following commands:

    use master
    go
    if not exists ( select 1 from sysusers where name = 'sapsso' )
    begin
    exec sp_adduser 'sapsso'
    end
    go
    if not exists ( select 1 from sysusers where name = 'sapsa' )
    begin
    exec sp_adduser 'sapsa'
    end
    go
    grant manage master key to sapsa
    go
    grant manage server permissions to sapsso
    go

    3. Create user sapsso in the SAP database and grant the necessary permissions.
    use <SID>
    go
    if not exists ( select 1 from sysusers where name = 'sapsso' )
    begin
    exec sp_adduser 'sapsso'
    end
    go
    grant manage database permissions to sapsso
    go
    grant manage database encryption key to sapsso
    go
    grant select on sysobjects to sapsso
    go
    grant manage any object permission to sapsso
    go
    

    4. Grant select permission on SVERS (ABAP) or BC_DDDBTABLERT (Java) to role sap_mon.
    If this is an ABAP instance, log in to SAP ASE as sa and execute these commands:


    use <SID>
    go
    setuser 'SAPSR3'
    go
    grant select on SAPSR3.SVERS to sap_mon
    go
    setuser

    If this is a Java instance, log in to SAP ASE as sa and execute these commands:

    use <SID>
    go
    setuser 'SAPSR3DB'
    go
    grant select on SAPSR3DB.BC_DDDBTABLERT to sap_mon
    go
    setuser


    5. Revoke permissions from dbo to restrict access to user data.
    Log on as user sapsso and execute the following commands:


    use <SID>
    go
    revoke setuser from dbo granted by dbo
    go
    revoke alter any object owner from dbo granted by dbo
    go
    revoke manage any user from dbo granted by dbo
    go


    6. Grants to dbo needed by R3load (DB refresh) and CDS support.
    Log on as user sapsso and execute the following commands:


    use <SID>
    go
    grant drop any object to dbo
    go
    grant create any function to dbo
    go

    7. Grants to sap_adm and sap_mon to allow maintenance functionality.
    Log on as user sapsso and execute the following commands:


    use <SID>
    go
    grant reorg any table to sap_adm
    go
    grant manage any statistics to sap_adm
    go
    use master
    go
    grant monitor qp performance to sap_mon
    go


    8. Lock user sa.
    Log on as user sapsso and execute the following commands:

    use master
    go
    exec sp_locklogin sa, 'lock'
    go


    9. Optimize the SAP ASE server configuration for the use of granular permissions.
    Log on as user sapsa and execute the following commands:


    use master
    go
    exec sp_configure 'permission cache entries', 1024
    go
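As an optional sanity check afterwards (a sketch), sp_configure without a value reports the current setting of the feature switch:

    use master
    go
    exec sp_configure 'enable granular permissions'
    go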

For more details, refer to SAP Note 2106688.





Cannot Start HANA tenant DB due to "Error while resolving groupname"


Symptom

    HANA tenant DB cannot be started
    The following error message can be found in the indexserver trace:

f Service TrexService.cpp(00551) : FATAL: initialization of service failed with exception exception 1: no.7100007 (MultiDB/impl/MultiDBConfiguration.cpp:1310)
Error while resolving groupname rc=2: No such file or directory

Environment

    HANA 1.0
    HANA 2.0

Reproducing the Issue

 Start HANA
Cause

The <sid>adm user has not been assigned to the group "sapsys". This can be caused by manually recreating the group "sapsys" or by configuring a high-isolation multitenant DB incorrectly.
In the following example, <sid>adm has been assigned group ID 79, while the group ID for "sapsys" is actually 456.

cat /etc/passwd |grep HANA
<sid>adm:x:1001:79:SAP HANA Database System Administrator:/usr/sap/<SID>/home:/bin/bash

cat /etc/group |grep sapsys
sapsys:x:456:


Resolution

    If your HANA isolation level is low (the default), re-assign user <sid>adm the primary group "sapsys" using the following command:

        usermod -g sapsys <sid>adm

    If your HANA isolation level is high, refer to the SAP HANA Tenant Databases guide to configure the user group.
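In the low-isolation case, you can verify the assignment afterwards with a quick check (<sid>adm is the placeholder for your administration user); the output should show the "sapsys" group ID as the primary group:

        id <sid>adm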



For more details, refer to SAP Note 2670327.

Friday, 8 June 2018

SAP HANA database license management



About SAP HANA database Licenses

     License keys are required to use the SAP HANA database. We can install and delete license keys using SAP HANA Studio, the SAP HANA HDBSQL command-line tool, or the HANA SQL query editor.


The SAP HANA database supports two kinds of license key:

  •                     Temporary License Key
  •                     Permanent License Key

Temporary License Key:

A temporary license key is automatically installed with a new SAP HANA database installation. It is valid for 90 days from the installation date. During this period, we should request a permanent license key on the SAP Service Marketplace and apply it.

Permanent License Key:

Permanent license keys are valid until the predefined expiration date. They have to be requested on the SAP Service Marketplace under Keys & Requests and applied to the individual SAP HANA database. Furthermore, they specify the amount of memory licensed for the target SAP HANA installation.

Note: Before a permanent license key expires, we should request and apply a new permanent license key. If it expires, a temporary license key valid for 28 days is automatically installed. During this time, we can request and install a new permanent license key.


There are two types of permanent license key available for SAP HANA

  •                     Unenforced (SWPRODUCTNAME=SAP-HANA)
  •                     Enforced (SWPRODUCTNAME=SAP-HANA-ENF)

Unenforced License:

If an unenforced license key is installed, the operation of SAP HANA is not affected if its memory consumption exceeds the licensed amount of memory.

Enforced License:

If an enforced license key is installed, the system is locked down if the memory consumption of HANA exceeds the licensed amount of memory plus some tolerance. If this happens, HANA has to be restarted, or a new license key that covers the amount of memory in use needs to be requested and installed.
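To check which license key is currently installed, you can query the monitoring view M_LICENSE (a sketch; it shows, among other things, the hardware key, the expiration date, and whether the license is permanent):

SELECT * FROM "PUBLIC"."M_LICENSE";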

Source: SAP forum, https://blogs.sap.com

HANA database installation: GCC 6.x library error on Linux


Symptom

In order to run SAP applications compiled with GCC 6.x on RHEL or SLES, additional operating system software packages are required to be installed.

Please note:
 
For better readability of this SAP note - if not mentioned explicitly otherwise - "RHEL" is used as a synonym for "RHEL for SAP" and "RHEL for SAP HANA", and "SLES" covers "SLES for SAP Applications" as well.

Other Terms

GCC 6.x compat-sap-c++-6 libgcc_s1 libstdc++6

Reason and Prerequisites

In order to run SAP applications which were compiled with GCC 6.x - a newer compiler version than originally delivered with RHEL 7 and SLES 12 - additional runtime environment packages for GCC 6.x need to be installed before running such an SAP application. Otherwise, the SAP application no longer starts and issues error messages such as:

/usr/lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by <program>)
/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.18' not found (required by <program>)
/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.19' not found (required by <program>)
/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by <program>)
/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by <program>)
/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.22' not found (required by <program>)

 Additional example of HANA 2 on RHEL 7.x:

SAP HANA Database installation kit detected.
Installation failed
  Checking system requirements failed
    Cannot access required library '/opt/rh/SAP/lib64/compat-sap-c++-6.so': No such file or directory
    Please install the rpm package 'compat-sap-c++-6'!
  
 Additional example of HANA 2 on SLES 12 SP1:

SAP HANA Database installation kit detected.
Installation failed
  Checking system requirements failed
    rpm package 'libgcc_s1' needs at least version 6.2. (current version = 5.2.1+r226025)
    rpm package 'libstdc++6' needs at least version 6.2. (current version = 5.2.1+r226025)
    The operating system is not ready to perform gcc 6 assemblies

Solution

In order to run SAP applications compiled with GCC 6.x on RHEL or SLES, the required compiler runtime libraries need to be installed or updated. Starting with RHEL 7 and SLES 12, the required GCC 6.x runtime is available via the normal software update repositories.

Please note: You can also operate older versions of SAP applications on this system after installing the GCC 6.x libraries. There is no need to reboot.

    RHEL 7

    The RPM package compat-sap-c++-6 needs to be installed additionally to the standard compiler runtime libraries:
    In order to get access to the library, customers need a subscription for "Red Hat Enterprise Linux for SAP Solutions" or, for non-SAP-HANA use cases, alternatively "Red Hat Enterprise Linux for SAP Business Applications". With this subscription you can subscribe your server to the "RHEL Server SAP" or "RHEL for SAP HANA" channel on the Red Hat Customer Portal or your local Satellite server. After subscribing a server to the channel, the output of "yum repolist" must contain the following:
    rhel-x86_64-server-sap-<version> RHEL Server SAP (v. <version> for 64-bit <architecture>)
    rhel-x86_64-server-sap-hana-<version> RHEL Server SAP HANA (v. <version> for 64-bit <architecture>)

    Afterwards the compat-sap-c++-6 package can be installed with the following command:

    # yum install compat-sap-c++-6

    Minimum version is compat-sap-c++-6, for example:

    # rpm -q compat-sap-c++-6
    compat-sap-c++-6.el7_2.x86_64



    SLES 12

    The RPM packages libgcc_s1 and libstdc++6 need to be installed or updated. Please proceed as follows:

    # zypper install libgcc_s1 libstdc++6

    Minimum versions are libgcc_s1-6.2.1 and libstdc++6-6.2.1, for example:

    # rpm -q libgcc_s1 libstdc++6
    libgcc_s1-6.2.1.x86_64
    libstdc++6-6.2.1.x86_64

 
For more information, see SAP Note 2455582.

Saturday, 7 April 2018

SUM ERROR "M_PREMA", "V_7BR_PREMA" and "V_T7BRAP" tables error in ACT_TRANS or ACT_UPG phase



Symptom
During the upgrade process, the ACT_UPG phase terminates with DDIC activation errors "TABLE XXXX was not activated", as per the error message below:

Directory:    /SUM/abap/log
Name      :     ACTUPG.ELG

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

DDIC ACTIVATION ERRORS and RETURN CODE in SAPAAAA731.ABC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


   3 EDT014XActivate dependent table "M_PREMA"
   2WEAD275 Foreign key "M_PREMA"-"BUKRS" (dependency factor "REF" is
   incorrect here)
   2WEAD275 Foreign key "M_PREMA"-"OBRA" (dependency factor "REF" is
   incorrect here)
    1EEDT354 Search field "M_PREMA"-"FILIA" not contained in search help
    attachment
   3 EDT015 Dependent table "M_PREMA" was not activated
   3 EDO525XActivate dependent view "M_PREMA"
   1 EDH202 Check dependent search help "PREMA"
   1 EDH103 Search help "PREMA" is consistent
   3 EMC763 Key field "T7BRAP"-"GRPBR" missing
   2WEMC732 All fields are evaluated as key field
   3 EMC726 View must be created in the database
   3 EDO526 View was activated with warnings"M_PREMA"
    1EEDO519 "Table" "M_PREMA" could not be activated

   3 EDT014XActivate dependent table "V_7BR_PREMA"
   2WEAD275 Foreign key "V_7BR_PREMA"-"BUKRS" (dependency factor "REF" is
   incorrect here)
   2WEAD275 Foreign key "V_7BR_PREMA"-"OBRA" (dependency factor "REF" is
   incorrect here)
    1EEDT354 Search field "V_7BR_PREMA"-"FILIA" not contained in search help
    attachment
   3EDT015 Dependent table "V_7BR_PREMA" was not activated

   3 EDO525XActivate dependent view "V_7BR_PREMA"
   1 EDH202 Check dependent search help "HRPADBRA"
   1 EDH103 Search help "HRPADBRA" is consistent
   3 EMC763 Key field "T7BRAP"-"GRPBR" missing
   2WEMC732 All fields are evaluated as key field
   3 EMC726 View must be created in the database
   3 EDO526 View was activated with warnings"V_7BR_PREMA"
    1EEDO519 "Table" "V_7BR_PREMA" could not be activated

   3 EDT014XActivate dependent table "V_T7BRAP"
   1EEDT354 Search field "V_T7BRAP"-"FILIA" not contained in search help
   attachment
   3 EDT015 Dependent table "V_T7BRAP" was not activated
   3 EDO525XActivate dependent view "V_T7BRAP"
    1EEDO519 "Table" "V_T7BRAP" could not be activated
   1 ETP111 exit code : "8"

Reproducing the Issue

  • Start SUM tool.
  • In ACT_UPG phase, the error occurs.

Cause

The root cause of the issue is that some fields were removed from table T7BRAP. This change is delivered by standard packages; however, it is not reflected in dependent objects like M_PREMA.

M_PREMA, on the other hand, is a matchcode object, which is obsolete.

Resolution


You can ignore this error for the specified tables and continue with the upgrade.

Continue the upgrade by choosing the option 'Accept non-severe errors and repeat phase MAIN_SHDRUN/ACT_UPG' in the SUM tool.




For more details, refer to SAP Note 1909524.




Wednesday, 7 February 2018

Analyzing log volume full situations in HANA




Symptom

Alerts with ID 2 "Alert Disk usage" for "storage type: LOG" are generated.
The log volume is full, so the database cannot be started or does not accept new requests. Note that HANA Studio or OS commands like "df -h" might show a different output (indicating no problem at all) when a cluster file system like HP IBRIX or IBM GPFS is used. In this case, use the filesystem-specific commands, e.g. "mmdf/mmrepquota" for IBM GPFS.
The database trace file of a particular service contains "rc=24 no space left on device" errors for the basepath_logvolumes or basepath_logbackup.

Reason and Prerequisites

The disk usage of the log volumes only grows if there are no more segment files available for overwriting/re-use (state FREE; see the monitoring view M_LOG_SEGMENTS). Log segments become available for re-use when they have been successfully backed up (in log_mode = normal) and are not required for a database restart. The aim of this SAP Note is to help you identify the root cause of why log segments are not getting freed and remain in state TRUNCATED (indicating that the log segment has not yet been backed up successfully). In a log-volume-full situation, no free log segments are available for re-use and no new log segments can be allocated due to the limited disk quota/size, so the database cannot be started or stops accepting requests. Apart from bringing the system up again as soon as possible, the root cause needs to be investigated and resolved; otherwise, you may soon run into a log-volume-full situation again.


Solution

Do not remove data files or log files using operating system tools, as this will corrupt the database! Follow SAP Note 1679938 to temporarily free up space in the log volume; this way you should be able to start the database for root cause analysis and problem resolution (after the situation is resolved, please undo the workaround). The questionnaire below may help you narrow down the root cause:

1) Which database service is affected and what is the current state of the log-segments allocated?

In case the database accepts SQL requests:

Execute the below SQL statement:

select b.host, b.service_name, a.state, count(*) from "PUBLIC"."M_LOG_SEGMENTS" a join "PUBLIC"."M_SERVICES" b on (a.host = b.host AND a.port = b.port) group by b.host, b.service_name, a.state;

1.a) If most of the log segments are in state FREE, this indicates a similar log-volume-full situation in the past. Although the log volume appears to be full, the free and already allocated (e.g. indexserver) log segments will be re-used. You can release the space occupied by the allocated log segments in your file system by executing the SQL statement:

ALTER SYSTEM RECLAIM LOG;

1.b) If most of the log segments are in state TRUNCATED, there appears to be a problem with the log backup, which needs to be identified using the next steps. In case you see a lot of log segments in state BACKEDUP, this indicates a problem with a (hanging) savepoint. See the table at the bottom of this section for details on the different states.

In case the database is offline:

Execute the command below in the log directory, e.g. /hana/log/<SID>/mnt00001 for single-node systems. This way we can identify the affected service; furthermore, it may help to decide which volume to choose for the temporary workaround presented in SAP Note 1679938.

File count:

for i in $(find . -type d ) ; do
    echo $i ;
    ( find $i -type f | wc -l ) ;
done

Disk usage:

du -h

./hdb00001
5
./hdb00002
8
./hdb00003
95
130M    ./hdb00001
50M     ./hdb00002
93G     ./hdb00003

It will give you a file count / disk usage for each of the subdirectories (each representing a volume) hdb00001, hdb00002, hdb00003, and so on. You can identify the relevant service by searching for "volume=" in the topology.txt contained in the fullsysteminfodump, or by searching the trace file of each service individually for "assign to a volume started".

Using the command "hdblogdiag seglist" on the log segment directory identified, you can find out the state of the log segments:

hdblogdiag seglist /hanalog/SOH/mnt00001/hdb00003
LogSegment[0/90:0x70cb3180-0x7108fa40(0xf723000B)/<...>,TS=2014-10-24 12:31:59.480279/Free/0x0]@0x00007fd7696bfb00
LogSegment[0/92:0x7108fa40-0x7108fb40(0x4000B)/<...>,TS=2014-11-04 02:19:58.774171/Free/0x0]@0x00007fd7696c0000
LogSegment[0/91:0x7108fb40-0x71091300(0x5f000B)/<...>,TS=2014-11-07 00:46:58.377588/Free/0x0]@0x00007fd7696bfd80
LogSegment[0/3:0x71091300-0x710f6780(0x1952000B)/<...>,TS=2014-11-07 02:18:50.000233/Free/0x0]@0x00007fd76a710400
LogSegment[0/6:0x710f6780-0x710f6900(0x6000B)/<...>,TS=2014-11-07 02:50:16.406526/Free/0x0]@0x00007fd76a710b80
LogSegment[0/0:0x710f6900-0x71144f40(0x1399000B)/<...>,TS=2014-11-07 03:02:03.221834/Free/0x0]@0x00007fd76a70fc80  
...

To get the count of the log segments in state Free/Truncated/Writing, execute the commands below:

sohdb:/hanalog/SOH/mnt00001/hdb00003> hdblogdiag seglist /hanalog/SOH/mnt00001/hdb00003 | grep -i Free | wc -l

91

sohdb:/hanalog/SOH/mnt00001/hdb00003> hdblogdiag seglist /hanalog/SOH/mnt00001/hdb00003 | grep -i Truncated | wc -l

0

sohdb:/hanalog/SOH/mnt00001/hdb00003> hdblogdiag seglist /hanalog/SOH/mnt00001/hdb00003 | grep -i Writing | wc -l

1

In general, a log segment can have one of the following states:

State      Description
Writing    Currently writing to this segment.
Closed     Segment is closed by the writer.
Truncated  Truncated, but not yet backed up. The backup will remove it.
BackedUp   Segment is already backed up, but a savepoint has not yet been written; therefore it needs to be kept for instance recovery.
Free       Segment is free for reuse.


2) Do you have the automated log backup enabled?

You can check the corresponding database configuration parameter "enable_auto_log_backup" in global.ini, either in the configuration tab in HANA Studio when the database is online, or at filesystem level in the location of the custom configuration files (log on as <sid>adm on the HANA appliance):

cdglo (usually resolves to /usr/sap/<SID>/SYS/global)

cd hdb/custom/config

If it has been deactivated (value = no), revert the parameter to its default value "yes" to activate automated log backup again.
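When the database is online, the same parameter can also be read via SQL; a sketch using the monitoring view M_INIFILE_CONTENTS:

SELECT * FROM "PUBLIC"."M_INIFILE_CONTENTS" WHERE KEY = 'enable_auto_log_backup';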



3) Where is the storage location for the log backups?

Does this parameter point to a mounted/external storage location or to the local filesystem (see also the database parameter: global.ini > "basepath_logbackup")?

3.a) If it resolves to external storage, does the trace file of the affected service show I/O-related errors when writing to this location? If so, check the external storage for availability and I/O errors.

3.b) If it resolves to the local filesystem, are "basepath_logbackup" and "basepath_logvolumes" located on the same disk? If this is true, the log backups very likely filled up the entire disk, so neither can new log segments be created nor can old ones be backed up. In order to resolve the issue - depending on your backup strategy - either move the log backups to a different location to free up space, or perform a full data backup and delete older (log) backups from the catalog and file system using the HANA Studio backup editor (for details, see the SAP HANA Administration Guide).
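A quick way to compare the two locations is shown below (a sketch with hypothetical paths; substitute the values of the two basepath parameters). If both rows report the same filesystem, log segments and log backups compete for the same space:

df -h /hana/log/<SID> /hana/backup/<SID>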



4) What is the throughput when writing to the basepath_logbackup location? To check this, use one of the following:

4.a) The SQL Statement below:

select v.host, v.port, v.service_name, s.type,
       round(s.total_read_size / 1024 / 1024, 3) as "Reads in MB",
       round(s.total_read_size / case s.total_read_time when 0 then -1 else s.total_read_time end, 3) as "Read Throughput in MB",
       round(s.total_read_time / 1000 / 1000, 3) as "Read Time in Sec",
       trigger_read_ratio as "Read Ratio",
       round(s.total_write_size / 1024 / 1024, 3) as "Writes in MB",
       round(s.total_write_size / case s.total_write_time when 0 then -1 else s.total_write_time end, 3) as "Write Throughput in MB",
       round(s.total_write_time / 1000 / 1000, 3) as "Write Time in Sec",
       trigger_write_ratio as "Write Ratio"
from "PUBLIC"."M_VOLUME_IO_TOTAL_STATISTICS_RESET" s, PUBLIC.M_VOLUMES v
where s.volume_id = v.volume_id and type = 'LOG_BACKUP'
order by type, service_name, s.volume_id;

4.b) Or create a dummy file in the basepath_logbackup using:

Writing to the log area:
dd if=/dev/zero of=/hanalog/SOH/mnt00001/testfile bs=1M count=1024

Reading from the log area:
dd if=/hanalog/SOH/mnt00001/testfile of=/tmp/testfile bs=1M count=1024

Note that the testfile first needs to be written before it can be read back again. Please be very careful with the input (if) / output (of) locations used, as you can easily overwrite existing files!

4.c) Set the trace level below to print additional information.

Enable the trace:
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('trace', 'stream') = 'interface' with reconfigure;

Disable the trace:
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') UNSET ('trace', 'stream') with reconfigure;


Note that in case the database does not accept SQL requests, you need to maintain this parameter by directly adding:

[trace]
stream = interface

in global.ini in the path holding the custom parameter files (see point 2). In addition, the database needs to be reconfigured (by calling "hdbnsutil -reconfig", which might not work depending on the state of the database) or restarted to make the changes active. This increased trace level works for both data and log backups:

[28055]{-1}[-1/-1] 2014-11-13 20:09:28.755532 a Stream ChannelUtils.cpp(00365) : SynchronousCopyHandler::doCopy finished, source="/logsegment_000_00000002.dat" (mode= RW, access= rwrwr-, flags= DIRECT|MUST_EXIST|LAZY_OPEN), factory= (root= "/usr/sap/UPD/SYS/global/hdb/log/mnt00001/hdb00002/" (access= rwrwr-, flags= AUTOCREATE_DIRECTORY, usage= LOG, fs= ext3, config= (AsyncWriteSubmitActive=auto,AsyncWriteSubmitBlocks=new,AsynReadSubmit=off,#SubmitQueues=1,#CompletionQueues=1))), destination="/.log_backup_2_0_538414336_538623872.1415905765532" (mode= W, access= rwrwr-, flags= DIRECT|UNALIGNED_SIZE|TRUNCATE|MULTI_WRITERS), factory= (root= "/usr/sap/UPD/HDB00/backup/log/" (access= rwrwr-, flags= AUTOCREATE_PATH|DISKFULL_ERROR, usage= LOG_BACKUP, fs= ext3, config= (AsyncWriteSubmitActive=auto,AsyncWriteSubmitBlocks=new,AsynReadSubmit=off,#SubmitQueues=1,#CompletionQueues=1))) copySize=13410304, copyTimeUs=2738384 totalReceiveTimeUs=2647457, minReceiveThroughput=4.83MB/s, maxReceiveThroughput=4.83MB/s, avgReceiveThroughput=4.83MB/s, receiveCount=1 totalSendTimeUs=88337, minSendThroughput=144.78MB/s, maxSendThroughput=144.78MB/s, avgSendThroughput=144.78MB/s, sendCount=1



5) What is the size of the backup catalog, the time needed for its backup and the log backup frequency?

What is the size of the backup catalog?

In case the database accepts SQL requests:

The SQL statement below gives you the backup catalog size in MB for the latest 10 backup catalog backups:

select top 10 round(backup_size/1024/1024, 2) from "PUBLIC"."M_BACKUP_CATALOG_FILES" where source_type_name = 'catalog' order by backup_id desc;


In case the database is offline:

Go to the basepath_logbackup and execute in the log subdirectory:

ls -lth log_backup_0_0_0_0* | head -n 10

linux-upd:/usr/sap/UPD/HDB00/backup/log> ls -lth log_backup_0_0_0_0* | head -n 10
-rw-r--r-- 1 updadm sapsys   11M Nov  6 23:32 log_backup_0_0_0_0.1415313121836
-rw-r--r-- 1 updadm sapsys   11M Nov  6 23:28 log_backup_0_0_0_0.1415312899446
-rw-r--r-- 1 updadm sapsys   11M Nov  6 23:28 log_backup_0_0_0_0.1415312898184
-rw-r--r-- 1 updadm sapsys   11M Nov  6 23:28 log_backup_0_0_0_0.1415312896945
-rw-r--r-- 1 updadm sapsys   11M Nov  6 23:28 log_backup_0_0_0_0.1415312895417
-rw-r--r-- 1 updadm sapsys   11M Nov  6 23:27 log_backup_0_0_0_0.1415312823518
-rw-r--r-- 1 updadm sapsys   11M Nov  6 23:17 log_backup_0_0_0_0.1415312221722
-rw-r--r-- 1 updadm sapsys   11M Nov  6 23:02 log_backup_0_0_0_0.1415311321688
-rw-r--r-- 1 updadm sapsys   11M Nov  6 22:47 log_backup_0_0_0_0.1415310421760
-rw-r--r-- 1 updadm sapsys   11M Nov  6 22:47 log_backup_0_0_0_0.1415310419517



What is the value of the database configuration parameter "log_backup_timeout_s" in global.ini? Is this parameter set to a value smaller than the default of 300 seconds? If this amount of time passes before a segment is full, the segment is closed prematurely and put into the log segment backup queue.

Thus, if the log backup frequency is too high and the backup of the backup catalog itself takes (due to its size) longer than writing a log backup, this can cause a queuing situation in which log segments cannot be released because "backup catalog" backups have not yet been performed. You can identify the "backup catalog" backups using the lines below in backup.log:

2014-11-06T20:46:10+01:00 P026822 14986a489b2 INFO LOGBCKUP state of service: nameserver, linux-upd:30001, volume: 0, BackupExecuteCatalogBackupInProgress
2014-11-06T20:46:10+01:00 P026822 14986a489b2 INFO LOGBCKUP state of service: nameserver, linux-upd:30001, volume: 0, BackupExecuteCatalogBackupFinished

In this situation, either reduce the size of the backup catalog (delete older backups) or improve the I/O performance to the log backup location. If a lot of log segments are generated due to a high amount of changes on the data, you may consider increasing the log segment size of the particular service accordingly (see the "log_segment_size_mb" parameter in the ini file of the particular service).



6) What is the time difference between log backup process states "100%" and "BackupLogSuccess"?

In general, this is the time consumed by updating the backup catalog itself and writing it to the (external) filesystem location. Only after these steps have completed is a log backup considered successful. If you still see a lot of log segments in state TRUNCATED, please check the time difference between the log backup process states "100%" and "BackupLogSuccess" for a particular log backup in backup.log.

backup.log

2014-11-17T04:44:00+00:00  P012889      149bc0de8ff INFO    LOGBCKUP progress of service: indexserver, linux-upd:30003, volume: 4, 88% 939524096/1073229824
2014-11-17T04:44:02+00:00  P012889      149bc0de8ff INFO    LOGBCKUP progress of service: indexserver, linux-upd:30003, volume: 4, 100% 1073229824/1073229824
...
2014-11-17T04:54:43+00:00  P012889      149bc0de8ff INFO    LOGBCKUP state of service: indexserver, linux-upd:30003, volume: 4, BackupLogSuccess

In this example, the time difference is about 10 minutes, which is very long and could be responsible for the "queuing" situation. If the I/O performance is OK (see question 4), first try to reduce the size of the backup catalog (which should result in a shorter log backup post-processing time). If reducing the backup catalog does not help or is not feasible, you can also increase the values of the two parameters log_segment_size_mb and log_backup_timeout_s in global.ini, so that fewer log backups are performed and log backup queuing in this particular scenario is avoided.

[persistence]

log_segment_size_mb = 4096
log_backup_timeout_s = 1200

Note that changing log_backup_timeout_s has an impact on the Recovery Point Objective (this interval is the maximum time span of data that will be lost if the log area cannot be used for recovery). The value 1200 (seconds) is a starting point and can be increased up to 3600 (default = 900). See the SAP HANA Administration Guide (SPS 8), chapter 4.2.4.2.3 "Log Backup Options", for details.



7) Do you use third-party backup software (using Backint) to perform the log backups?

Is the database configuration parameter "log_backup_using_backint" in global.ini set to true? If not, should it be enabled? If yes, check the steps below.

7.a) Are there errors reported in backint.log?

7.b) On checking backup.log, did a previous log backup attempt of a particular service terminate prematurely because of an OOM situation or a particular signal? If yes, is the corresponding pipe & backup agent process - using the pid (process ID) from backint.log - still running at HANA operating system level (use "ps -ef | grep <pid>")? If so, kill the dangling backup agent process manually.

backint.log

2014-08-27 01:56:20.000 backint started:
  command: /usr/sap/UPD/SYS/global/hdb/opt/hdbbackint -f backup -p /hana/shared/UPD/global/hdb/opt/initEH0.utl -i /var/tmp/hdbbackint_UPD.3wmm3a -  /var/tmp/hdbbackint_UPD.Qkgptl -u UPD -s 1409097380583 -c 1 -l LOG
  pid: 13506
  input:
  #SOFTWAREID "backint 1.04" "HANA HDB server 1.00.74.03.392810"
  #PIPE "/usr/sap/UPD/SYS/global/hdb/backint/log_backup_3_0_11908452928_11909495616"



8) If none of the steps presented so far helped to identify the issue:

Open an incident with SAP HANA Product Support on component HAN-DB or HAN-DB-BAC (for backup-related issues) and provide the information below:

Increase the database trace level for component "backup" to level debug and re-create the issue.

Enable the trace:
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('backup', 'trace') = 'debug' with reconfigure;

Disable the trace:
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') UNSET ('backup', 'trace') with reconfigure;
Create a full system info dump according to SAP Note 1732157.
If your hardware vendor is IBM, also provide the output of the IBM check tool for SAP HANA appliances (SAP Note 1661146).
Open an SSH remote connection to your system (see SAP Note 1275351) and add valid OS logon credentials to the Secure Area.

For more details, refer to SAP KBA 2083715.