SAP Create PDF by Print Preview of Smartform

** Myself

Go to the form and choose print preview.

Select an output device and choose Print preview.
!! The output device must use the PDF device type PDFUC (for Thai output, the Thai fonts must be uploaded to PDFUC).

Sol1
In the print preview, choose Menu: Goto > PDF Preview.
The preview window renders the content through the embedded IE control, served over the ICM HTTP port 80XX (check with SMICM).

The Smartform will be displayed as PDF in the PDF preview window.


Sol2
In the print preview, enter pdf! in the command (OK) field and press Enter.

A new window pops up showing the Smartform converted to a PDF file.

Save the file in PDF format.

Change Password of SAP User TMSADM

** Myself
1. Add a record to table TMSCROUTE in all systems (DEV, QAS, PRD). As mentioned in note 749977, create an entry in TMSCROUTE with:
     sysnam = ,TRGCLI
     rfcroute = 000
2. In DEV client 000, run report TMS_UPDATE_PWD_OF_TMSADM (SE38) to update the password in all systems.

3. Adjust the TMS* RFC destinations in SM59 on all systems (DEV, QAS, PRD).

4. Distribute the TMS configuration.


Note 1568362 - TMSADM password change
Environment
  • Database and operating system independent.
  • SAP releases 46B (SAP_BASIS) and higher.
Cause
  • Missing function
  • Security requirements
Resolution
If more stringent password rules are required, user TMSADM is affected by the new restrictions. In order to give user TMSADM a password different from the standard one that complies with the password restrictions set in the system, the following steps must be performed.
These are manual steps and are best suited to a small system landscape.
1. Apply the source code corrections of notes 713622 and 749977 to all the domain systems. This "enables" the functionality of modifying the TMSADM password. If your support package level already contains the correction instructions from these notes, proceed to step 2.

2. As mentioned in note 749977 you must create an entry in the TMSCROUTE table with:
     sysnam = ,TRGCLI
     rfcroute = <client>
where <client> is the client number that the customer wants to have in the logon screen. If you want to use the default client, leave the rfcroute field empty for sysnam = ,TRGCLI.
As explained in note 761637, we recommend setting the client to 000.

3. Proceed as explained in note 761637. That is,
   A. Create an entry in table TMSCROUTE with:
        sysnam = ,ADMPWD
        rfcroute = USER
      This changes the TMSADM behaviour so that the password for that user can be modified.

B. Now go to SM59, open the 'R/3 connections' node and delete the TMS* RFC connections. After that, regenerate the RFC connections as follows:
      STMS > Overview > Systems > Extras > Generate RFC Destinations
This creates the new RFC connections for the Transport Management System, in which the TMSADM password can be changed.

C. Now you should be able to maintain the passwords of the TMSADM users and the TMSADM@<SID>.<DOMAIN> RFC destinations in all the systems. You can choose a new password that follows the new stringent password rules, and it won't be changed automatically by the system.
If the system landscape is large, the process can be automated.

  1. For this, implement SAP note 1414256 (for releases <= 640 the manual steps in SAP note 761637 must still be applied).
    Note 1414256 contains report TMS_UPDATE_PWD_OF_TMSADM, which must be run in client 000 of the domain controller (DC). Note that this report by itself does not support domain links.
  2. If domain links exist, use SAP note 1515926. The note should be applied to all systems of the connected domains. Once the note is applied, start the report described in note 1414256 on all domain controllers of the connected domains, i.e. execute TMS_UPDATE_PWD_OF_TMSADM in client 000 on every domain controller.

Shell script housekeeping/ringout: move/remove files older than N days

** Myself 

# Move files (BC_BC_XMB* filenames) older than 100 days from /sapmnt/EPP/global/ to /backup/BC_BC_XMB_ARCHIVE
find /sapmnt/EPP/global -maxdepth 1 -name 'BC_BC_XMB*' -type f -mtime +100 -exec mv {} /backup/BC_BC_XMB_ARCHIVE \;
# Remove files older than 200 days in /backup/BC_BC_XMB_ARCHIVE
find /backup/BC_BC_XMB_ARCHIVE -name 'BC_BC_XMB*' -type f -mtime +200 -exec rm {} \;
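
To run this housekeeping automatically, a minimal cron sketch; the 02:30/02:35 schedule, the cron.d file name, and the log path are assumptions, not part of the original script:

# /etc/cron.d/bc_xmb_housekeeping (hypothetical file name and schedule)
30 2 * * * root find /sapmnt/EPP/global -maxdepth 1 -name 'BC_BC_XMB*' -type f -mtime +100 -exec mv {} /backup/BC_BC_XMB_ARCHIVE \; >> /var/log/bc_xmb_housekeeping.log 2>&1
35 2 * * * root find /backup/BC_BC_XMB_ARCHIVE -name 'BC_BC_XMB*' -type f -mtime +200 -exec rm {} \; >> /var/log/bc_xmb_housekeeping.log 2>&1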

Problem: FTP (vsftpd) puts/gets files with umask 077 (rw-------) or 022 (rw-r--r--); change to 002 (rw-rw-r--)

** Myself

By default, vsftpd creates uploaded files with umask 077, i.e. permissions rw------- (600). To get rw-rw-r-- (664), change the umask to 002.

1.) Edit vsftpd.conf and change local_umask to 002:

vi /etc/vsftpd.conf

#
# Default umask for local users is 077. You may wish to change this to 022,
# if your users expect that (022 is used by most other ftpd's)
#
local_umask=002
#
# Uncomment to put local users in a chroot() jail in their home directory
# after login.
#
#chroot_local_user=YES

2.) Restart vsftpd:
 /etc/init.d/vsftpd restart
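
As a sanity check, the way a umask maps to the mode of newly created files can be reproduced in any shell (the test file names are just illustrative):

# Files are created with base mode 666; the umask bits are masked out.
umask 077; touch test077; ls -l test077   # -rw------- (600)
umask 022; touch test022; ls -l test022   # -rw-r--r-- (644)
umask 002; touch test002; ls -l test002   # -rw-rw-r-- (664)
rm -f test077 test022 test002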

Linux high memory usage and Reducing cached memory usage


sync; echo 3 > /proc/sys/vm/drop_caches
echo "vm.drop_caches = 3" >> /etc/sysctl.conf"
sync

Writing to this will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.
To free pagecache:
  • echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
  • echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
  • echo 3 > /proc/sys/vm/drop_caches
As this is a non-destructive operation, and dirty objects are not freeable, the user should run "sync" first in order to make sure all cached objects are freed.
This tunable was added in kernel 2.6.16.
Memory
To see how much memory you have:
free -m
Note: look at the -/+ buffers/cache line, which excludes memory used for buffers and caching.
top
cat /proc/meminfo
dmesg | grep -i mem
vmstat
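
A small sketch (run as root; assumes the classic free output with a "Mem:" row) showing the effect of dropping the caches by comparing free memory before and after:

# Print free memory, drop clean caches, print again
free -m | awk '/^Mem:/ {print "free before:", $4, "MB"}'
sync
echo 3 > /proc/sys/vm/drop_caches
free -m | awk '/^Mem:/ {print "free after: ", $4, "MB"}'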

Disable Linux memory cache on kernel 2.6.16+:
sync
cat /proc/sys/vm/drop_caches
#sync; echo 3 > /proc/sys/vm/drop_caches
sysctl vm.drop_caches=3
echo "vm.drop_caches = 3" >> /etc/sysctl.conf
sync

SAP BI BEx Analyzer Error IWB_HTML_HELP_URL_GET

** Myself
BEx Analyzer Error  IWB_HTML_HELP_URL_GET


[The client PC must be able to access help.sap.com]

SR13 -> PlainHtmlHttp -> create a new entry:

Variant: Documentation
Platform: WN32
Area: IWBHELP
Server Names:
Path:
Language: EN


Save, then test the connection from BEx Analyzer again.

Step by Step Single Node Cluster with OpenAIS on SUSE 11, Oracle 11g & SAPInst

** Myself
OpenAIS Cluster Environment
Cluster information
host01 information:
- hostname = host01
- ip = 172.10.10.10
- virtual IP = 172.10.10.11
- heartbeat ip = none (no dedicated heartbeat network)
- eth0 = 172.10.10.10 (data, no separate heartbeat)
- eth1 = not used
- shared VG = qasdatavg (lv = lvol1, lvol2)
- Oracle SID = QAS [MCOD: csd,csq,bcd,bcq,j2d,j2q,ecd,ecq]
- SAP SID = j2d,j2q,trex_90,trex_92
- disks in qasdatavg = md0/md1/md2/md3/md4

Create OpenAIS cluster
  1. Create a shared volume group.
  2. Create an OpenAIS cluster.
  3. Create an application package.
  4. Define dependencies/constraints for cluster resource startup.
  5. Maintain the cluster.

1.     Create a shared volume group


1. Install multipath and configure MPIO/LVM

(SLES 9 only) Change the system configuration: using an editor of your choice, set this value in /etc/sysconfig/hotplug:
HOTPLUG_USE_SUBFS=no
On SLES:
rpm -ivh multipath-tools-0.X.X-XX.X.i586.rpm
On Red Hat:
rpm -ivh device-mapper-multipath-0.X.X-XX.elX.x86_64.rpm
- Ensure the driver for the HBA is added to INITRD_MODULES in /etc/sysconfig/kernel:
INITRD_MODULES="cciss reiserfs piix megaraid_sas mptspi siimage processor thermal fan jbd ext3 dm_mod edd dm-multipath qla2xxx"
- Ensure that the MPIO services boot.multipath and multipathd are set to start on boot:
chkconfig boot.multipath on
chkconfig multipathd on

The services can be started immediately with: 
/etc/init.d/boot.multipath start
/etc/init.d/multipathd start

To verify 
chkconfig --list | grep multipath

- Configure multipath-tools via /etc/multipath.conf.
Set "failback 60" in the defaults section (helps in case the SAN is not stable).
On SUSE, if the /etc/multipath.conf file does not exist, copy the example to create it:
cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf
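
A minimal sketch of the corresponding defaults section; only the failback value comes from the note above, everything else in the file stays as the example provides:

defaults {
    # fail back to the preferred path group 60 seconds after it recovers,
    # to avoid path flapping when the SAN is unstable
    failback 60
}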
Edit the blacklist:
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st|sda)[0-9]*"
devnode "^hd[a-z][0-9]*"
devnode "^cciss!c[0-9]d[0-9].*"
}

- MD RAID preconfiguration
chkconfig boot.md off   # the shared MD arrays must be assembled by the cluster, not at boot
It is still possible to use local MD devices. These can be configured in the file /etc/mdadm.conf.localdevices, which uses the same syntax as /etc/mdadm.conf. The cluster tools RPM package contains a new initscript called boot.md-localdevices. Copy this file to the /etc/init.d directory and enable it with:
chkconfig boot.md-localdevices on
- To use LVM2 on top of the MPIO devices, adjust the filter in /etc/lvm/lvm.conf:

filter = [ "a|/dev/sda[1-4]|", "a|/dev/md.*|", "r|/dev/.*|" ]
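
After changing the filter, a quick sanity check that LVM only scans the intended devices (lvmdiskscan and pvscan are standard LVM2 tools):

# List the block devices LVM scans after the filter change
lvmdiskscan
# Existing physical volumes that pass the filter
pvscan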

- Reboot


2. Create the MD configuration

Check DM multipath
 multipath -l

3600507680281816384000000000000dc dm-27 ABC,1234
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 3:0:0:4 sdf 8:80  active ready running
| `- 4:0:0:4 sdp 8:240 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:1:4 sdk 8:160 active ready running
  `- 4:0:1:4 sdu 65:64 active ready running
3600507680281816384000000000000cc dm-26 ABC,1234
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 3:0:1:3 sdj 8:144 active ready running
| `- 4:0:1:3 sdt 65:48 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:0:3 sde 8:64  active ready running
  `- 4:0:0:3 sdo 8:224 active ready running
3600507680281816384000000000000cb dm-25 ABC,1234
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 3:0:0:2 sdd 8:48  active ready running
| `- 4:0:0:2 sdn 8:208 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:1:2 sdi 8:128 active ready running
  `- 4:0:1:2 sds 65:32 active ready running
3600507680281816384000000000000ca dm-23 ABC,1234
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 3:0:1:1 sdh 8:112 active ready running
| `- 4:0:1:1 sdr 65:16 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:0:1 sdc 8:32  active ready running
  `- 4:0:0:1 sdm 8:192 active ready running
3600507680281816384000000000000c9 dm-24 ABC,1234
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 3:0:0:0 sdb 8:16  active ready running
| `- 4:0:0:0 sdl 8:176 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:1:0 sdg 8:96  active ready running
  `- 4:0:1:0 sdq 65:0  active ready running



# Create MD arrays using mdadm.


mdadm --create /dev/md0 --raid-devices=1 --level=0 \
  --metadata=1.2 --force /dev/mapper/3600507680281816384000000000000c9

mdadm --create /dev/md1 --raid-devices=1 --level=0 \
  --metadata=1.2 --force /dev/mapper/3600507680281816384000000000000ca

mdadm --create /dev/md2 --raid-devices=1 --level=0 \
  --metadata=1.2 --force /dev/mapper/3600507680281816384000000000000cb

mdadm --create /dev/md3 --raid-devices=1 --level=0 \
  --metadata=1.2 --force /dev/mapper/3600507680281816384000000000000cc

mdadm --create /dev/md4 --raid-devices=1 --level=0 \
  --metadata=1.2 --force /dev/mapper/3600507680281816384000000000000dc

# Save the MD configuration for use after reboot in /clusterconf/QAS/mdadm.conf:

ARRAY /dev/md0 UUID=cb6443ea:7a85f171:3dfca667:723eb9c9
ARRAY /dev/md1 UUID=fd640129:153c4dd5:0a7b2596:44c1584f
ARRAY /dev/md2 UUID=e8c6b9d2:14f71cee:f27dc725:c452ea1e
ARRAY /dev/md3 UUID=8158487c:5ea824ee:fe01fca9:32dc5cbd
ARRAY /dev/md4 UUID=2e152c68:2f72bfb8:4373b348:1164ddc6
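
These ARRAY lines can also be generated instead of typed by hand; run this while the arrays are assembled (the output includes a few extra fields, which are harmless):

# Emit ARRAY lines with the UUIDs of the currently assembled arrays
mdadm --detail --scan >> /clusterconf/QAS/mdadm.conf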

# Manually start and stop MD devices like this:
# stop all MD arrays
mdadm --stop --scan
# verify that the arrays are stopped
more /proc/mdstat

mdadm --detail /dev/md0



for DEVICE in /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4; do
mdadm --assemble "${DEVICE}" --config=/clusterconf/QAS/mdadm.conf; done;
 
## to scan md ->  mdadm --assemble --scan
 
 

3. Create physical volumes
for i in 0 1 2 3 4
do
pvcreate -f /dev/md$i
done
 
 

4. Create the volume group
vgcreate qasdatavg /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4
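
A quick check that the volume group was created with all five PVs:

# Show the VG summary and the PVs assigned to qasdatavg
vgdisplay qasdatavg
pvs | grep qasdatavg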
 

5. Create the logical volumes
lvcreate -L 20G -n lvorahomeQAS qasdatavg
lvcreate -L 1G -n lvQASoriglogA qasdatavg
lvcreate -L 1G -n lvQASoriglogB qasdatavg
lvcreate -L 1G -n lvQASmirrlogA qasdatavg
lvcreate -L 1G -n lvQASmirrlogB qasdatavg
lvcreate -L 30G -n lvQASoraarch qasdatavg
lvcreate -L 130G -n lvQASsapdata1 qasdatavg
lvcreate -L 130G -n lvQASsapdata2 qasdatavg
lvcreate -L 130G -n lvQASsapdata3 qasdatavg
lvcreate -L 130G -n lvQASsapdata4 qasdatavg

6. Create the file systems
mkfs.ext3 /dev/qasdatavg/lvorahomeQAS
mkfs.ext3 -b 1024 /dev/qasdatavg/lvQASoriglogA
mkfs.ext3 -b 1024 /dev/qasdatavg/lvQASoriglogB
mkfs.ext3 -b 1024 /dev/qasdatavg/lvQASmirrlogA
mkfs.ext3 -b 1024 /dev/qasdatavg/lvQASmirrlogB
mkfs.ext3 /dev/qasdatavg/lvQASoraarch
mkfs.ext3 /dev/qasdatavg/lvQASsapdata1
mkfs.ext3 /dev/qasdatavg/lvQASsapdata2
mkfs.ext3 /dev/qasdatavg/lvQASsapdata3
mkfs.ext3 /dev/qasdatavg/lvQASsapdata4
 
# Verify the block size
dumpe2fs -h /dev/mapper/qasdatavg-lvQASmirrlogA |grep "Block size"
 
dumpe2fs -h /dev/mapper/qasdatavg-lvQASoraarch |grep "Block size"

7. Create the filesystem layout (mount points)
mkdir -p /oracle/QAS
mkdir -p /oracle/QAS/sapdata1
mkdir -p /oracle/QAS/sapdata2
mkdir -p /oracle/QAS/sapdata3
mkdir -p /oracle/QAS/sapdata4
mkdir -p /oracle/QAS/origlogA
mkdir -p /oracle/QAS/origlogB
mkdir -p /oracle/QAS/mirrlogA
mkdir -p /oracle/QAS/mirrlogB
mkdir -p /oracle/QAS/oraarch
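
A sketch for mounting everything once by hand to verify the layout, assuming lvorahomeQAS is the filesystem for /oracle/QAS itself; in the running cluster these mounts are managed by the cluster resources, not /etc/fstab:

# Mount the top-level filesystem first
mount /dev/qasdatavg/lvorahomeQAS /oracle/QAS
# The sub-mount points must exist inside the mounted filesystem
mkdir -p /oracle/QAS/{sapdata1,sapdata2,sapdata3,sapdata4,origlogA,origlogB,mirrlogA,mirrlogB,oraarch}
for lv in sapdata1 sapdata2 sapdata3 sapdata4 origlogA origlogB mirrlogA mirrlogB oraarch; do
  mount /dev/qasdatavg/lvQAS${lv} /oracle/QAS/${lv}
done
df -h | grep qasdatavg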
 


2. Create an OpenAIS Cluster

