Get Physical RAM on HP-UX

On HP-UX, to get the physical / real memory, you can do the following:

grep Physical /var/adm/syslog/syslog.log


Replacing a Mirrored HP-UX Boot Disk


Reduce any logical volumes that have mirror copies on the faulty disk so that they no longer mirror onto that disk. (note: lvdisplay -v /dev/vgXX/lvol* will show the lvols)

# lvreduce -m 0 /dev/vgXX/lvolX /dev/dsk/cXtXd0 (for 1 way mirroring)

Reduce the volume group.

# vgreduce /dev/vgXX /dev/dsk/cXtXd0

Stop I/O to the drive.

# pvchange -a n /dev/dsk/c0t2d0

---> Replace the drive.

# pvchange -a y /dev/dsk/c0t2d0

Initialize the disk for LVM.

# pvcreate -f -B /dev/rdsk/cXtXd0

Set boot switch for no quorum and add offline diagnostics (if available to drive)

# mkboot -a "boot vmunix -lq" /dev/dsk/c0t2d0

# mkboot -b /usr/sbin/diag/lif/updatediaglif2 -p ISL -p HPUX -p LABEL -p AUTO /dev/rdsk/cXtXd0

Extend the volume group.

# vgextend /dev/vgXX /dev/dsk/cXtXd0

Lvextend the mirrors back onto the replaced drive.

# lvextend -m 1 /dev/vgXX/lvolX /dev/dsk/cXtXd0 & (for 1-way mirroring). Do this for each lvol on the system. The & runs the task in the background; you can check progress with lvdisplay -v /dev/vg00/lvolXX.

After running the mkboot and lvextend commands, run lvlnboot -Rv to relink the disk into the Boot Data Reserved Area of all the physical volumes in the volume group.

# lvlnboot -Rv
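The per-lvol lvextend step above lends itself to a loop. A minimal dry-run sketch; vg00 and the replacement disk /dev/dsk/c0t2d0 are assumptions, and the function only echoes the commands so you can review them before piping to sh:

```shell
#!/usr/bin/sh
# Print the lvextend command for every lvol in a volume group (dry run).
# The vg name and disk below are example values -- substitute your own.
remirror_vg() {
    vg=$1
    disk=$2
    for lv in /dev/$vg/lvol*; do
        echo "lvextend -m 1 $lv $disk"
    done
}

remirror_vg vg00 /dev/dsk/c0t2d0
# after reviewing the output: remirror_vg vg00 /dev/dsk/c0t2d0 | sh
```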


allowing backspace and @ at the login prompt on HP-UX

The backspace, @ (at sign), and # (pound sign) don't work at the login prompt on HP-UX systems. HP-UX sets the default kill and erase characters to @ and # respectively; you can change them via /dev/ttyconf in a custom startup script. See the following ITRC threads and the section labeled "Control Character Default Assignments" in the stty man page: http://docs.hp.com/en/B2355-60127/stty.1.html

See termio(7) for the default values of control characters: http://docs.hp.com/en/B2355-60127/termio.7.html

For example: stty erase ^H kill ^U intr ^C susp ^Z < /dev/ttyconf
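A defensive wrapper for such a startup script. This is a sketch that assumes /dev/ttyconf behaves as described above; it skips quietly on systems where the device doesn't exist:

```shell
#!/usr/bin/sh
# Set sane erase/kill/intr/susp defaults via /dev/ttyconf, but only when the
# device actually exists, so the script is harmless elsewhere.
set_tty_defaults() {
    if [ -c /dev/ttyconf ]; then
        stty erase '^H' kill '^U' intr '^C' susp '^Z' < /dev/ttyconf
    else
        echo "no /dev/ttyconf on this system, nothing to do" >&2
    fi
}

set_tty_defaults
```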


using WS_FTP to automate file xfers

Some notes:

  • Many options in WS_FTP are now user-specific, i.e. the changes made to the options are only reflected in that current user's profile.

  • PGP support is now included in V9, but I'd rather do the encryption/decryption outside of ws_ftp and use the 'gpg' tool instead. By default, ws_ftp will try to decrypt and verify every file downloaded with a .pgp extension. This option must be turned off. Go into the options under the PGP section and uncheck the "Always decrypt & verify encrypted and signed" option.

  • If WS_FTP needs to be reinstalled, make sure you go in and out of the application and verify the options that you changed are still intact.

  • I had to connect to each site to make sure the connections were still good. In the process of doing so I needed to click "Trust this connection" on any of the SSH connections because a new fingerprint was generated.

  • Each user profile has a registry setting pointing to the application data directory. After the upgrade this initially pointed to the user's specific profile, so we had to change it back to the following:


"DataDir"="C:\\Documents and Settings\\All Users\\Application Data\\Ipswitch\\WS_FTP"

  • Since we typically use the 'local_assigned' user account and he's not an admin on windowBox1, we needed to change some registry permissions in order to allow some options to be read and customized. We changed the security on the following registry key:

[HKEY_LOCAL_MACHINE\SOFTWARE\Ipswitch] -- added WindowsGroup1 group with 'Full Control'

  • The speed of uploads to SSH servers dropped by half in v9.01. Add "sftppipefraction=150" to the sftp/ssh site's section in ws_ftp.ini to regain full speed. Without this, every lifemasters transfer failed.

WS_FTP Pro notes v8: (some of this applies to v9 as well)



During the install it will ask about "shared" and "personal" sites. The only option we want selected is the "Allow users to create or modify shared sites". The reason for this is that it simplifies where the site data is held. Shared sites go into the "all users" profile directory. "Personal" sites go to the individual user profile, which means if switching from one user account to another, the ws_ftp.ini file would have to be copied over as well. The registry settings this affects are recorded below.


"DataDir"="C:\\Documents and Settings\\All Users\\Application Data\\Ipswitch\\WS_FTP"



Directories and files:

shared sites (shared by all users) stored in:

C:\Documents and Settings\All Users\Application Data\Ipswitch\WS_FTP\Sites\ws_ftp.ini

Predefined Sites stored in: predef.ini

MySites are stored in: original.ini

Any folders created will generate an .ini file based on the folder name. The ini file will contain any sites created under that folder.

Log files are stored (usually overwritten each time WS_FTP is called from the command line) in:

C:\Documents and Settings\All Users\Application Data\Ipswitch\WS_FTP\Logs

You can copy sections of an .ini file from one computer to another but the password may have to be re-entered on the destination computer because of the way WS_FTP encrypts the password field.

Use a fully qualified path, because the command-line instance starts in the root directory; otherwise it will default to /usr/bin on some systems.

File specifications for uploading/downloading can be wildcarded. ( * or . )

command line example

NOTE: Trailing slash must be used for destination directory....

cd "%programfiles%\ws_ftp pro" &

wsftppro -s ftp://anonymous:test@ftp.ipswitch.com/pub/msdos/vmenu.zip -d local:c:\


cd "%programfiles%\ws_ftp pro" & wsftppro -s amisys:~/bin/list.sh -d local:c:\ -ascii


cd "%programfiles%\ws_ftp pro" & wsftppro -s amisys:~/bin/list.sh -d local:c:\ -binary


cd "%programfiles%\ws_ftp pro\wsftppro" & wsftppro -s amisys:~/bin/list.sh -d local:c:\

-lower lowercases the filename (only works when uploading TO a remote host, not downloading from a host)

WARNING: Using the command line, be careful that when downloading, the remote file name must be in the EXACT case in order to download the correct file.

The WS_FTP scheduler is just another front-end to the Win2k task scheduler.

GNUpg, gpg, encryption notes

Public Key Cryptography:

http://www.wvu.edu/~lawfac/mmcdiarmid/digital%20signatures.htm - older reference, but pretty easy to understand

http://www.lugod.org/presentations/pgp/ - good introduction and beginners guide

http://www.linuxjournal.com/article.mydivision?sid=4828 GPG the Best Free Crypto You Aren't Using, Part I of II

http://www.linuxjournal.com/article.mydivision?sid=4892 GPG the Best Free Crypto You Aren't Using, Part II of II



gpg usage: http://www.rhce2b.com/clublinux/RHCE-38.shtml

GPG notes:

http://www.gnupg.org/ - GNU Privacy Guard

If the --output option is not specified, gpg will usually write contents to stdout (the screen). You can also do file redirection to route the output to a file. The exception to this is the default decryption option:

gpg [filename]

The above syntax will decrypt the file to the original unencrypted filename. You can add other options to this command.

You encrypt with someone's public key, they decrypt with their secret key. Give your public key out to those that want to send you encrypted files/messages. Then only you (or anyone that has your secret key, which should be no one but you) can decrypt and view the file.

[name] = name, email or identifier of key. email addr is usually the best one to use because it's usually the most unique identifier.


gpg.man -- man page for gpg (lists all switches)

gpg.conf -- found in c:\gnupg (see readme.w32), contains all config options

If you receive the following warning when decrypting a file, there is probably a compatibility problem with the other user's signature, usually nothing to worry about: "WARNING: message was not integrity protected". To suppress the message, use --no-mdc-warning on the gpg command line or put the following in the gpg.conf file: no-mdc-warning

On the Windows platform, be sure to include the following option in gpg.conf or on the command line:


no-mangle-dos-filenames

From the man page: The Windows version of GnuPG replaces the extension of an output filename to avoid problems with filenames containing more than one dot. This is not necessary for newer Windows versions, so --no-mangle-dos-filenames can be used to switch this feature off and have GnuPG append the new extension. This option has no effect on non-Windows platforms.

NOTE: Any options specified in the configuration file (gpg.conf) should NOT have the double dashes at the beginning of them.

gpg.conf example file:



load-extension lib\idea

Generating a new key pair:

gpg --gen-key

The default way we have been creating the keys is:

kind of key you want: (1) DSA and ElGamal (default)

keysize: 2048

expiration: 0 (does not expire)

"Real name": mycompany

email: mydivision-{vendor}@mycompany.com where {vendor} is the vendor's name.

comment: mydivision - mycompany (usually)

Displaying/listing keys:

list all secret keys on the system:

gpg --list-secret-keys

list all public keys on the system:

gpg --list-keys

Importing keys:

to import an exported public or secret key into the appropriate keyring on this system:

gpg --import keyfile_to_import


use --armor option if sending key via email or if vendor requires ASCII armored data.

to export a public key; don't specify a name if you want to export all:

gpg --output filename.key --export [name]

to export a secret key; don't specify a name if you want to export all:

gpg --output filename.key --export-secret-keys [name]

Always export any keys before using them. This keeps a backup of all keys in case you screw up (you probably will too!). You can use the following as a template, replacing {vendor} with the vendor's name, and paste the text directly to the shell.

gpg --output mydivision-{vendor}@mycompany.com-public.asc --armor --export mydivision-{vendor}@mycompany.com

gpg --output mydivision-{vendor}@mycompany.com-public.key --export mydivision-{vendor}@mycompany.com

gpg --output mydivision-{vendor}@mycompany.com-secret.asc --armor --export-secret-keys mydivision-{vendor}@mycompany.com

gpg --output mydivision-{vendor}@mycompany.com-secret.key --export-secret-keys mydivision-{vendor}@mycompany.com
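The four-command template above can be wrapped in a small helper. A sketch that just prints the commands for a given vendor (the mydivision-{vendor}@mycompany.com naming convention is the one used above; pipe the output to sh to actually run the exports):

```shell
#!/usr/bin/sh
# Emit the backup/export commands for one vendor's key pair (dry run).
gpg_export_cmds() {
    keyid="mydivision-$1@mycompany.com"
    echo "gpg --output $keyid-public.asc --armor --export $keyid"
    echo "gpg --output $keyid-public.key --export $keyid"
    echo "gpg --output $keyid-secret.asc --armor --export-secret-keys $keyid"
    echo "gpg --output $keyid-secret.key --export-secret-keys $keyid"
}

gpg_export_cmds claimsnet
# to run for real: gpg_export_cmds claimsnet | sh
```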

NOTE: In some cases the vendor can't use certain algorithms such as AES192, AES256, etc. In these cases you will need to edit the key after generating it and export the key in order to disable or restrict use of the particular "problematic" algorithms. Instructions below:

gpg --edit-key [name]

setpref S3 S2 S1 H2 H3 Z2 Z1 (this string was used for Express-Scripts because of their requirement)

(Run setpref with whatever algorithms/options you want included; list every option except the ones you want to disable.)



List the algorithms/options on the key:

gpg --edit-key keyid showpref quit (long verbose format)

gpg --edit-key keyid pref quit (short terse format)

List of options/preferences to use on keys:

s2 = 3des

s3 = cast5

s4 = blowfish

s7 = aes

s8 = aes192

s9 = aes256

s10 = twofish

s1 = idea (if you use it, otherwise leave out)

h3 = ripemd160

h2 = sha1

h1 = md5

z2 = zlib

z1 = zip

z0 = no compression


When encrypting a file, you can use multiple -r (recipient) options if needed. To decrypt the file, the secret key pair that corresponds to the public key used to encrypt the file will be needed.

The following example will create an encrypted file with a .gpg extension.

gpg -r info@claimsnet.com --encrypt-files encrypt-test.txt (preferred method)

gpg --output output_filename -r [name] --encrypt filename_to_encrypt

for interactive prompt asking which key to use to encrypt:

gpg --output output_filename --encrypt filename_to_encrypt

When using a key to encrypt for the very first time, you will see text similar to the following:

gpg: checking the trustdb

gpg: checking at depth 0 signed=1 ot(-/q/n/m/f/u)=0/0/0/0/0/7

gpg: checking at depth 1 signed=0 ot(-/q/n/m/f/u)=1/0/0/0/0/0


to decrypt a file (you must have the secret key that matches the public key that was used to encrypt the file):

gpg filename_to_decrypt -- decrypt file and write to original filename (preferred method)


gpg --output output_filename --decrypt filename_to_decrypt

if you don't have the secret key for an encrypted file you'll get the error: "gpg: decryption failed: secret key not available"

signing a key:

gpg --local-user mydivision-claimsnet@mycompany.com --sign-key info@claimsnet.com

-- it will ask for level of trust. Choose the highest level of trust. (3)

after you receive someone's public key (whom you trust) you can sign it. If you don't you'll get the following message every time you try to encrypt something with their public key:

gpg --output encrypt-test.pgp -r info@claimsnet.com --encrypt encrypt-test.txt

gpg: C458F397: There is no indication that this key really belongs to the owner

1024g/C458F397 2001-02-28 "Claimsnet.com Inc. <info@claimsnet.com>"

Primary key fingerprint: 1254 FD28 5BF7 DF69 CD02 9072 4155 8840 575F 950E

Subkey fingerprint: 0164 29BF CEB1 96B6 91AC FF76 CE41 BAAA C458 F397

It is NOT certain that the key belongs to the person named

in the user ID. If you *really* know what you are doing,

you may answer the next question with yes

Use this key anyway? n

gpg: encrypt-test.txt: encryption failed: unusable public key

After you sign the recipient's key when encrypting a file you won't get the error message.

marking keys as trusted (need to do this when we import our keys into new keyring file):

gpg --edit-key mydivision-esi@mycompany.com

Command> trust

Your decision? 5

Do you really want to set this key to ultimate trust? y

Command> quit

Changing the passphrase of the secret key (in case of lost/stolen key):

gpg --edit-key mydivision-esi@mycompany.com

Command> passwd

Enter passphrase: ****

Enter the new passphrase for this secret key.

Enter passphrase: *******

Repeat passphrase:*******

Command> save

Deleting/removing keys no longer needed:

Recommend exporting the keys first before deleting them.

delete secret key:

gpg --delete-secret-keys [name]

delete a public key:

gpg --delete-keys [name]

delete both secret and public key pair:

gpg --delete-secret-and-public-key [name]

--delete-secret-and-public-key name

Same as --delete-key, but if a secret key

exists, it will be removed first. In batch mode

the key must be specified by fingerprint.


To automate/batch decrypt files, use the following options. MAKE SURE that the gnupg directory is well secured, and keep the "passphrase-file" in the same directory or another secure directory:

--passphrase-fd n

Read the passphrase from file descriptor n. If

you use 0 for n, the passphrase will be read

from stdin. This can only be used if only

one passphrase is supplied. Don't use this

option if you can avoid it.


type passphrase-file | gpg --passphrase-fd 0 [filename_to_decrypt]
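A batch wrapper sketch around the same idea. The drop-directory and passphrase-file paths are placeholders, and DRYRUN is our own safety switch (not a gpg option) that makes the function print what it would run instead of running it:

```shell
#!/usr/bin/sh
# Decrypt every *.gpg file in a drop directory, feeding the passphrase on fd 0.
batch_decrypt() {
    dir=$1
    passfile=$2
    for f in "$dir"/*.gpg; do
        [ -f "$f" ] || continue          # glob matched nothing
        if [ "${DRYRUN:-0}" = "1" ]; then
            echo "gpg --passphrase-fd 0 $f < $passfile"
        else
            gpg --passphrase-fd 0 "$f" < "$passfile"
        fi
    done
}

DRYRUN=1 batch_decrypt /inbound /secure/passphrase-file
```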


Signatures are basically good for verifying the authenticity of a message/file/whatever.

clearsign (good for emailing), example:

hp.txt contents (in courier new font) before signing:

This is a test file...

gpg --local-user mydivision-abf@mycompany.com --clearsign hp.txt

after signing it will create a file named hp.txt.asc:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

This is a test file...
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (MingW32)

...
-----END PGP SIGNATURE-----

verify signature (must have public key in keyring to do this):

gpg --verify hp.txt.asc

in the case of a detached signature, by putting the signature file first:

gpg --verify file.sig file

Log of a session running gpg --gen-key:

C:\>gpg --gen-key

gpg (GnuPG) 1.2.4; Copyright (C) 2003 Free Software Foundation, Inc.

This program comes with ABSOLUTELY NO WARRANTY.

This is free software, and you are welcome to redistribute it

under certain conditions. See the file COPYING for details.

Please select what kind of key you want:

(1) DSA and ElGamal (default)

(2) DSA (sign only)

(4) RSA (sign only)

Your selection? 1

DSA keypair will have 1024 bits.

About to generate a new ELG-E keypair.

minimum keysize is 768 bits

default keysize is 1024 bits

highest suggested keysize is 2048 bits

What keysize do you want? (1024) 2048

Requested keysize is 2048 bits

Please specify how long the key should be valid.

0 = key does not expire

<n> = key expires in n days

<n>w = key expires in n weeks

<n>m = key expires in n months

<n>y = key expires in n years

Key is valid for? (0) 0

Key does not expire at all

Is this correct (y/n)? y

You need a User-ID to identify your key; the software constructs the user id

from Real Name, Comment and Email Address in this form:

"Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name: mycompany

Email address: mydivision-mckession@mycompany.com

Comment: mydivision - mycompany

You selected this USER-ID:

"mycompany (mydivision - mycompany ) <mydivision-mckession@mycompany.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? e

Email address: mydivision-othercorp@mycompany.com

You selected this USER-ID:

"mycompany (mydivision - mycompany ) <mydivision-othercorp@mycompany.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o

You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform

some other action (type on the keyboard, move the mouse, utilize the

disks) during the prime generation; this gives the random number

generator a better chance to gain enough entropy.






We need to generate a lot of random bytes. It is a good idea to perform

some other action (type on the keyboard, move the mouse, utilize the

disks) during the prime generation; this gives the random number

generator a better chance to gain enough entropy.





public and secret key created and signed.

key marked as ultimately trusted.

pub 1024D/51CF3CC5 2004-04-23 mycompany (mydivision - mycompany ) <mydivision-othercorp@p


Key fingerprint = 9982 34EB 114A 6A4D 8EC5 9FB8 98A8 F30F 51CF 3CC5

sub 2048g/DF7989E4 2004-04-23

getting a public key from a keyserver:

gpg --keyserver http://pgp.mit.edu --search-keys dd9jn@gnu.org

Curl notes and usage

cURL notes:

official web sites:

curl: http://curl.haxx.se


openssl: http://www.openssl.org

Curl is an open source command line tool for transferring files with URL syntax, supporting FTP, FTPS, HTTP, HTTPS, GOPHER, TELNET, DICT, FILE and LDAP. Curl supports HTTPS certificates, HTTP POST, HTTP PUT, FTP uploading, kerberos, HTTP form based upload, proxies, cookies, user+password authentication, file transfer resume, http proxy tunneling and a busload of other useful tricks. Openssl handles the encryption part of the transfer.

Curl usage:

To use the config file ( _curlrc ), the following line must be set in the batch file or scripting environment:

set home=c:\directory_pathto_config_file

Since curl will only work with PEM formatted certificates, we need to convert the PKCS12 format certificate:

openssl pkcs12 -in [original certificate file] -clcerts -out [PEMfile]

Some web sites appear to use cookies for the session timeout (roughly a 5-10 minute window). For that reason, when initially communicating with the server we MUST run 2 passes of curl: one to authenticate and one to do the transfer (or whatever other command we want to issue).
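The two-pass idea, sketched as a script. The hostname and cert path come from the examples in this section; CURL="echo curl" is our own dry-run assumption, so set CURL=curl for a real transfer:

```shell
#!/usr/bin/sh
# Pass 1 authenticates and records the session cookie; pass 2 sends the cookie
# back and performs the actual download.
CURL="echo curl"                  # dry run; set CURL=curl to really transfer
JAR=/tmp/cookie.jar
CERT="/path/to/mysecrets.pem:the_password_goes_here"

two_pass_fetch() {
    url=$1
    $CURL -E "$CERT" -c "$JAR" https://this.secretwebsite.com/
    $CURL -E "$CERT" -b "$JAR" -O "$url"
}

two_pass_fetch https://this.secretwebsite.com/outbound/SFT_Win32_4.0.39_Guide.doc
```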

Using config file:

list directory:

curl https://this.secretwebsite.com/* -- will list everything recursively; lists just file/path names

curl https://this.secretwebsite.com -- is like doing an ls -l


curl -O https://this.secretwebsite.com/outbound/SFT_Win32_4.0.39_Guide.doc


curl -T mylocalfile.txt https://this.secretwebsite.com/

files we get back: mylocalfile.txt_201706.NO_ADD_REC

that's HHMMSS (maybe pacific timezone)

Deleting files:

curl -X DELETE https://this.secretwebsite.com/mylocalfile.txt_201706.NO_ADD_REC

From the manual:

-X/--request <command>

(HTTP) Specifies a custom request to use when communicating with the HTTP server. The specified request will be used instead of the standard GET. Read the HTTP 1.1 specification for details and explanations.

_curlrc (config file) contents:

#cookie send file (-b)

cookie C:\sys\temp\curl\cookie.jar

#cookie receive file (-c)

cookie-jar C:\sys\temp\curl\cookie.jar

#redirect to different location (-L)


#certificate file and pw for authentication (-E)

cert e:\curl\mysecrets.pem:the_password_goes_here

#display progress bar (-#) instead of default statistics


#Write output to a local file named like the remote file we get.

#(Only the file part of the remote file is used, the path is cut off.)


HP-UX btmp-utmp accounting

/home/maint/bin/acctcleanup.sh: runs only via cron, on the first day of every month at midnight. It moves all entries in btmp and wtmp to /home/maint/logs/acct with a file name format of wtmp-monthYEAR or btmp-monthYEAR.

see utmp(4)

/var/adm/btmp Bad login database

/var/adm/wtmp Login database


utmp = record of all users logged onto the system.

btmp = bad login entries for each invalid logon attempt

wtmp = record of all logins and logouts.

# cleans up accounting files: /var/adm/wtmp and /var/adm/btmp  Should be run via
# cron at 0:00 the first of every month.
#wtmp contains a record of all logins and logouts
#btmp contains bad login entries for each invalid logon attempt

#if not running under cron then exit
if ! /home/maint/bin/rptree.sh $$ | grep cron >/dev/null; then
    banner executes only under cron
    exit 1
fi

#accounting files, archive directory, and the fwtmp conversion tool
#(fwtmp path is the usual HP-UX location)
w=/var/adm/wtmp
b=/var/adm/btmp
log=/home/maint/logs/acct
fwtmp=/usr/sbin/acct/fwtmp

#Since we are in a new month, get last month's name
case `date +%B` in
January) month=December;;
February) month=January;;
March) month=February;;
April) month=March;;
May) month=April;;
June) month=May;;
July) month=June;;
August) month=July;;
September) month=August;;
October) month=September;;
November) month=October;;
December) month=November;;
esac

wdate=$log/wtmp-$month`date +%Y.log`
bdate=$log/btmp-$month`date +%Y.log`

#convert each binary log to ASCII, archive it, then truncate the original
$fwtmp < $w > $wdate
cat /dev/null > $w

$fwtmp < $b > $bdate
cat /dev/null > $b

fbackup/frecover tips

Diagnostics: HP Library and Tape Tools (L&TT or LTT)

in: /opt/ltt and the main program: /opt/ltt/hp_ltt

most of our maintenance scripts are contained in:


Our main backup script and associated files are in:

/home/maint/bin/fbackup (main script: /home/maint/bin/fbackup/bin/fullback.sh)

An email reminder to swap the tape is sent when the backup job is complete. Tapes must be swapped every M-F.

Notes on using fbackup:

fbackup -v -f /dev/rmt/1m -f /dev/rmt/2m -I /indexfile.txt -g graphfile -i include_path -i include_another -c configFile

fbackup graph files do not support wildcards....

graph file contents:

i /include_me

e /exclude_me
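Graph files are plain text and easy to generate from a script. A sketch with example paths only; i lines are included in the backup, e lines are excluded:

```shell
#!/usr/bin/sh
# Write a simple fbackup graph file; the paths below are examples.
make_graph() {
    cat > "$1" <<EOF
i /home
i /etc
e /home/maint/tmp
EOF
}

make_graph /tmp/mygraph
# then: fbackup -v -f /dev/rmt/1m -g /tmp/mygraph
```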

If media is write protected you'll see something similar to the following:

fbackup(3032): could not open output file /dev/rmt/2m

default fbackup config used by sam: /etc/sam/br/fbackup_config

fbackup stores incremental backup information in /var/adm/fbackupfiles/dates

fbackup -u option updates the 'dates' files: /var/adm/fbackupfiles/dates

fbackup config file : current config file used in production: /home/maint/fbackup/cfg/best

current config file

description of each line

blocksperrecord 256

+ Number of 1024-byte blocks per record.

records 32

+ Number of records of shared memory to allocate.

checkpointfreq 1024

+ Number of records between checkpoints. Since the EOF marks between checkpoints are also used for fast searching on DLT tape drives, changing the checkpoint frequency may also affect selective recovery speed (see WARNINGS section).

readerprocesses 6

+ Number of file-reader processes.

maxretries 5

+ Maximum number of times fbackup is to retry an active file.

retrylimit 5000000

+ Maximum number of bytes of media to use while retrying the backup of an active file.

maxvoluses 2000

+ Maximum number of times a magnetic tape volume can be used.

filesperfsm 2000

+ The number of files between the fast search marks on DDS tapes. The cost of these marks is negligible in terms of space on the DDS tape. Not all DDS tape devices support fast search marks.

chgvol /home/maint/fbackup/bin/chgvol

+ Name of a file to be executed when a volume change occurs. This file must exist and be executable.

error /home/maint/fbackup/bin/error

+ Name of a file to be executed when a fatal error occurs. This file must exist and be executable.

Clearing fbackup header:

If you wish to clear the fbackup volume header from an fbackup tape because you want to blank out the number of times the tape has been used, use another backup utility on the tape. For example:

tar -cvf /dev/rmt/1m file_to_backup

frecover - recovering files from fbackup tape:

Recovering files may take quite a long time, especially if they are small files. Restoring a small home directory containing less than 18 MB took over 10 minutes, compared to restoring an 8.5 GB file, which took only 17 minutes.

Unlike fbackup, single files and (sometimes) wildcarded files may be specified and recovered using frecover. In either fbackup or frecover, the hyphen ( - ) can be used almost anywhere to write/read to/from stdout (standard output). This can be used to pipe commands together as well.

to export the contents (the index) of an fbackup tape:

frecover -f /dev/rmt/1m -I /path/index_file

to write contents to stdout: frecover -f /dev/rmt/1m -I -

to view the volume header ( contains fbackup specific info ):

frecover -f /dev/rmt/1m -V /path/volume_file

test recover (N option): perform the same operations, but don't recover the files to disk:

frecover -xvN -f /dev/rmt/1m

recover everything (should only be done in the event of a total system failure):

frecover -v -r -f /dev/rmt/1m


frecover -v -x -f /path_to_fbackup_file

recover all files on tape to the current directory without creating directory structure

frecover -v -x -f /dev/rmt/1m -F

recover all the files in the -i included path to the current directory without creating directory structure

frecover -v -x -f /dev/rmt/1m -F -i /home/maint/fbackup/cfg

recover entire tape contents to current working directory:

frecover -v -x -f /dev/rmt/1m -X

recover graph contents to current working directory:

frecover -v -x -f /dev/rmt/1m -X -g mygraphfile

recover /home/maint/fbackup to current working directory:

frecover -v -x -f /dev/rmt/1m -X -i /home/maint/fbackup/

Recover back to the original file location. The file on disk will not be overwritten if it's newer than the file from the tape; use the -o option CAUTIOUSLY to bypass this limitation.

frecover -vxf /dev/rmt/1m -i /home/maint/bin/showuser.sh

frecover option:

-m Print a message each time a file marker is encountered.

Using this option, frecover prints a message each time

either a DDS fast search mark, a filemark (EOF), or a

checkpoint record is read. Although useful primarily for

troubleshooting, these messages can also be used to

reassure the user that the backup is progressing during

long, and otherwise silent, periods during the recovery.


If the tape is in an unknown format, you can extract the contents using pax:

cd to_path_where_extracted_files_should_be_placed

pax -rv -s'/^\///' </dev/rmt/0m

tar - tape archiver:

WARNING: Use -tV to list all the files on the tape before extracting, because tar will not prompt to overwrite and will restore to the fully qualified pathname that is stored in the archive. So when creating a tar archive, please remember to use relative path names and not absolute ones. If no file argument is given, the entire content of the archive is extracted. Note that if several files with the same name are on the archive, the last one overwrites all earlier ones. Wildcards don't work.

tar -cvf /dev/rmt/1m path_to_archive - create a new archive

tar -cvf /dev/rmt/1m -C /home/maint . - create a new archive: first change to /home/maint and back up that directory using the relative path (.). Multiple -C options can be used.

tar -tVf /dev/rmt/1m - list all files on tape

tar -xvf /dev/rmt/1m - extract all files from tape

tar -xvwf /dev/rmt/1m - extract all files from tape prompting the user to restore each file

tar -xvf /dev/rmt/1m ./index_file - extract the file named index_file into the current directory.
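The relative-path advice above can be demonstrated without a tape drive by archiving to a plain file. The scratch paths here are examples, and -C is used the same way as in the tape commands above:

```shell
#!/usr/bin/sh
# Create an archive with relative names, then restore it somewhere else.
demo_tar() {
    work=`mktemp -d`
    mkdir -p "$work/src"
    echo "hello" > "$work/src/index_file"
    # archive the directory contents as ./... (relative, restorable anywhere)
    tar -cf "$work/arc.tar" -C "$work/src" .
    # restore into a fresh directory and show the recovered file
    mkdir "$work/restore"
    tar -xf "$work/arc.tar" -C "$work/restore"
    cat "$work/restore/index_file"
}

demo_tar
```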


make_tape_recovery: makes a bootable recovery tape

copy_boot_tape: make a copy of a recovery tape

/makerecovery.sh is the custom make_tape_recovery script that creates a recovery tape.

Ejecting a tape:

mt -f /dev/rmt/1mnb offline

Obtaining tape drive status:

mt -f /dev/rmt/2mnb status --show if tape is write protected

st -f /dev/rmt/0mnb -s --limited to displaying if device is OK and ready

Changing foreground / background colors in WRQ Reflection from connected host programmatically


Sample perl source: also see /usr/local/bin/*.pl

#!/usr/bin/perl -w

# /usr/local/bin/prodcolors.pl

# Changes colors of foreground/background to: black/white respectively

use warnings;

use strict;

my $code="\e&o2GSub Main

With Application

.SetColorMap rcPlainAttribute, rcBlack, rcWhite

End With

End Sub\e&oH";

print "$code";

Another macro, in raw escape-sequence form (<ESC> represents the escape character):

<ESC>&o2G Sub Main

Dim c As Integer

c = Val(InputBox$("Enter the sales amount:"))

MsgBox "Your commission is: " & c

End Sub<ESC>&oH

Reflection Basic:

Syntax: object.SetColorMap Attribute, Foreground, Background

.SetColorMap rcInverseAttribute+rcBlinkAttribute, rcGrey, rcBlue


.SetColorMap rcPlainAttribute, rcGrey, rcBlue






rcBoldAttribute (for UNIX and Digital hosts)

rcHalfbrightAttribute (for HP hosts)

rcFunctionKeys (for HP hosts)




















To open/add Reflection Basic macros menu:

Under the Setup menu, click Menu. Then add the Script menu from the "Additional Items/Items from version 6.x" tree to the current menu.

Reflection tech references:

How to Send Reflection Commands from the Host


Index of Reflection Scripting Technical Notes


Other very good references are the manuals/help files in:

C:\Program Files\Reflection\Manuals and C:\Program Files\Reflection\Help\Enu

rbrwin.hlp - Reflection Basic (scripting) Help

HP-UX CIFS (samba server) implementation

CIFS implementation on HPUX box:

Custom made mount at startup files:





CIFS Client configuration file:


log files:


test kerberos ticket:

/opt/cifsclient/bin/cifsgettkt -s windowsbox1

To mount:

$ mount -F cifs buildsys:/source /home/devl/source

To unmount:

$ umount /home/devl/source

mount -F cifs "windowsbox1:/php share" /cifs_mounts/windowsshare

umount /cifs_mounts/windowsshare

cifslogin windowsbox1 myusername

cifslogout windowsbox1

cifslist -A lists servers with shares and mountpoints

cifslist -U lists users in database

cifslist -M lists mounts in database

cifslist -S lists connected servers

cifslist -s <server> lists shares open at server

cifslist -u <server> lists users logged in at server

cifslist -m <share> lists mountpoints for share


Clone an Oracle 9i DB

Clone an Oracle 9i database:

In source DB:


  • alter database backup controlfile to trace as '/b1/mytracefile.trc';

  • shutdown immediate

  • Copy data files and redo logs to the clone location, e.g.:

    • for id in 01 02 03 04 05 06 07 08 09 10 11 12

    • do

    • mkdir -p /u${id}/mydb1 ; chown -R oracle:oinstall /u${id}/mydb1 ; cp -p -r /u${id}/prevea2/* /u${id}/mydb1

    • done

  • Copy source db's init{SID}.ora file to init{newSID}.ora ( usually in $ORACLE_HOME/dbs )

In trace file created earlier:

  • Remove second half of file (starts at the section: "Set #2. RESETLOGS case")

  • Remove all comment lines (they begin with #)

  • Remove blank space between -- STANDBY LOGFILE and DATAFILE

  • At top insert: connect / as sysdba

  • Change line: startup nomount to: startup nomount pfile={location of new cloned init file}

  • Change ALL file references from source db location to new clone location.

In the init{newSID}.ora:

  • Change ALL file references from source db location to new clone location.

  • export ORACLE_SID={newSID}

  • Create a new password file: orapwd file=$ORACLE_HOME/dbs/orapw{SID} password=sys entries=1

  • sqlplus /nolog

    • sql> @/b1/mytracefile.trc

    • You'll see two messages; these are OK to ignore:

      • ORA-00283: recovery session canceled due to errors

      • ORA-00264: no recovery required

    • Test the database: select count(1) from v$database;

    • alter database backup controlfile to trace as '/b1/newtracefile.trc';

    • shutdown immediate

  • Delete all control files (*.ctl) in the clone location. Example:

    • confirm with - find /u[0-9][0-9]/{clonelocation} -name "*.ctl"

    • then delete - find /u[0-9][0-9]/{clonelocation} -name "*.ctl" -exec rm {} \;

In the newly created trace file:

  • Remove second half of file (starts at the section: "Set #2. RESETLOGS case")

  • Remove all comment lines (they begin with #)

  • Remove blank space between -- STANDBY LOGFILE and DATAFILE

  • At top insert: connect / as sysdba

  • Change line: startup nomount to: startup nomount pfile={location of new cloned init file}

  • Change line: create controlfile to: create controlfile set database "{newDBname}" resetlogs noarchivelog

  • Change line: alter database open; to: alter database open resetlogs;

In the init{newSID}.ora :

  • change the db_name, instance_name, service_names, and dispatchers parameters to the new DB name.

  • sqlplus /nolog

    • sql> @/b1/newtracefile.trc

  • confirm updated dbname: select * from v$database;

for name in arch audit bdump cdump create pfile udump
do
mkdir -p /u02/srcdb1/${name}
done

for id in 01 02 03 04 05 06 07 08 09 10 11 12
do
mkdir -p /u${id}/srcdb1 ; chown -R oracle:dba /u${id} ; chmod -R 775 /u${id}
done


HP-UX EMS config / tips

HPUX config and tips:

The Event Monitoring Services (EMS) are part of the Online Diagnostics; this part is also called HW Monitoring. The hardware monitors watch the hardware and report via different notification channels to the user or sysadmin.

The Hardware monitors are divided into Event and Status monitoring.

Run the Hardware Monitoring Request Manager by typing:

/etc/opt/resmon/lbin/monconfig

This tool is stored in /etc/opt/resmon/lbin and has only a character interface. After installation of Event Monitoring, some monitors are already configured.

Hardware monitoring requests can also be configured through a GUI in SAM. When SAM is started there is an entry Resource Management, which contains a submenu entry Event Monitoring Service; selecting it starts the GUI.

One custom hardware event monitoring request is configured: all events (hardware problems) from all monitors are sent to the pager group (see /etc/mail/aliases); this is done via monconfig.

Status monitoring requests are configured via SAM (Resource Management / Event Monitoring Service) or via the following command: /opt/resmon/bin/emsui /opt/resmon/bin/EMSconfig.ui

The only things that can be monitored via this GUI are filesystem space (available-space status events) and the number of jobs in job queues (JobQueues status events). We monitor the number of jobs in job queues and the available space on filesystems, alerting when it falls below 70 MB.
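The 70 MB filesystem threshold can also be checked outside EMS with a small script; a sketch assuming POSIX df -kP output (on HP-UX, bdf prints similar columns; the pager mail alias in the comment is hypothetical):

```shell
# Warn when a filesystem has less than 70 MB (71680 kB) available.
# df -kP guarantees one line per filesystem; column 4 is available kB.
fs=/var
avail_kb=$(df -kP "$fs" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt 71680 ]; then
    echo "WARNING: $fs below 70MB free ($avail_kb kB)"
    # or pipe the warning to: mailx -s "fs warning" pagers
fi
```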

• Files Installed - EMS Bundles:





• Startup scripts:

Starts EMS persistence client: /sbin/init.d/ems

based on config file: /etc/rc.config.d/ems

Starts the EMS SNMP subagent: /sbin/init.d/emsa

based on config file: /etc/rc.config.d/emsagtconf

• EMS Daemons started:

At system boot:

Persistence Client: /etc/opt/resmon/lbin/p_client

EMS SNMP subagent: /etc/opt/resmon/lbin/emsagent

At configuration time upon client connect:

Registrar: /etc/opt/resmon/lbin/registrar

p_client (the persistence client) is started by init(1M) and is respawned by init if it dies.

p_client checks for dead monitors and restarts them (and any outstanding monitoring requests). Default interval is every 2 minutes (interval set in /etc/opt/resmon/config)

p_client runs at system startup to restart monitors and any outstanding monitoring requests

For EMS configuration problems, refer to:


For EMS monitor problems, refer to:


For EMS framework problems, refer to:


These three files are the most interesting EMS files, but there is also the default log file /var/opt/resmon/event.log. This file is used for hardware monitoring and for default notification when "Text file" is the chosen notification method.

The three log files grow over time; when one reaches about 500 kB it is automatically moved to <logfile>.old. When the log file grows back to 500 kB, it is moved again.
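That rotation behavior can be sketched as a shell function; this is an illustration of the described policy, not the actual EMS implementation:

```shell
# Mimic the described EMS log rotation: when a log reaches ~500 kB
# (512000 bytes), move it aside to <logfile>.old, overwriting any
# previous .old copy, and start a fresh empty log.
rotate_log() {
    log="$1"
    limit=512000
    size=$(wc -c < "$log")
    if [ "$size" -ge "$limit" ]; then
        mv "$log" "$log.old"
        : > "$log"          # recreate the log, empty
    fi
}
```

Usage: rotate_log /var/opt/resmon/event.log (from cron, for example).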