FDPSTP failed due to ORA-06550 'GATHER_ALL_COLUMN_STATS'

Symptoms
+---------------------------------------------------------------------------+
Application Object Library: Version : 11.5.0

Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.

FNDGACST module: Gather All Column Statistics
+---------------------------------------------------------------------------+

Current system time is 24-SEP-2007 17:00:11

+---------------------------------------------------------------------------+

**Starts**24-SEP-2007 17:00:11
ORACLE error 6550 in FDPSTP

Cause: FDPSTP failed due to ORA-06550: line 1, column 7:
PLS-00307: too many declarations of 'GATHER_ALL_COLUMN_STATS' match this call
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
.

The SQL statement being executed
+---------------------------------------------------------------------------+
Start of log messages from FND_FILE
+---------------------------------------------------------------------------+
+---------------------------------------------------------------------------+
End of log messages from FND_FILE
+---------------------------------------------------------------------------+
Successfully resubmitted concurrent program FNDGACST with request ID 316257 to start at 25-SEP-2007 17:00:00 (ROUTINE=AFPSRS)


+---------------------------------------------------------------------------+
Executing request completion options...


Finished executing request completion options.

+---------------------------------------------------------------------------+
Exceptions posted by this request:
Concurrent Request for "Gather All Column Statistics" has completed with error.

+---------------------------------------------------------------------------+
Concurrent request completed
Current system time is 24-SEP-2007 17:00:11

+---------------------------------------------------------------------------+

Solutions
A) Execute the following workaround:
1. As System Administrator, navigate to Concurrent > Program > Define.
2. Query the program and press the 'Parameters' button.
3. Add the two missing HMODE and INVALIDATE parameters to the definition of FNDGACST.
Ex:
8, Hmode, Hmode, Vset: 30 Characters, Required:N, Security:N, Display:Y
9, Invalidate, Invalidate, 1 char, Required:N, Security:N, Display:Y
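
To see why the call is ambiguous, you can list the overloaded signatures of the procedure from the APPS schema. A minimal sketch, assuming the program's executable calls FND_STATS.GATHER_ALL_COLUMN_STATS and using the standard ALL_ARGUMENTS dictionary view:

SQL> SELECT overload, position, argument_name, data_type
       FROM all_arguments
      WHERE owner = 'APPS'
        AND package_name = 'FND_STATS'
        AND object_name = 'GATHER_ALL_COLUMN_STATS'
      ORDER BY overload, position;

If the concurrent program definition does not pass the HMODE and INVALIDATE arguments, the call can match more than one overload, which is what PLS-00307 reports.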

Current Patchset Comparison Utility - patchsets.sh

You can find your Apps patch set level with it. If you want more information, click the URL below.
Download patchsets.sh
ftp://ftp.oracle.com/support/outgoing/PATCHSET_COMPARE_TOOL/patchsets.sh

AD Patch

Applying a patch updates your existing system in various ways, from adding a new
feature or product to improving system performance.

Patches are applied for a number of reasons, including:

■ Fixing an existing issue
■ Adding a new feature or functionality
■ Updating to a higher maintenance level
■ Applying the latest product enhancements
■ Providing interoperability to new technology stacks
■ Determining the source of an issue
■ Applying online help

Depending on the type of patch, it may update the file system, or the database, or
both.

Unified Driver File
The unified driver, named u<patchnum>.drv, contains the commands necessary to
change files and database objects, and to generate new objects. It contains copy,
database, and generate portions and performs the copy, database, and generate actions
in the stated order. You typically run the unified driver on all APPL_TOPs. AutoPatch
runs only the actions that are required for the current APPL_TOP.

C,D and G Driver File
C – copy driver
D – database driver
G – forms and reports generation driver

Command Line Patching Utilities
You run the following utilities from the command line.

AutoPatch
AutoPatch is the utility used to apply all patches to the Oracle Applications file system or database.

AD Merge Patch
When you apply patches individually, you must perform patching tasks multiple
times. For example, for every individual patch there may be duplicate link and
generate processes. AD Merge Patch merges multiple patches into a single patch so
that the required patching tasks and processes are performed only once.

e.g. admrgpch -s ./source -d ./target -merge_name merged001
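
A typical merge workflow looks like the sketch below; the patch zip names and the merged driver name are hypothetical, only admrgpch and its -s/-d/-merge_name arguments come from the example above:

$ mkdir source target
$ unzip p1234567.zip -d ./source
$ unzip p7654321.zip -d ./source
$ admrgpch -s ./source -d ./target -merge_name merged001

AutoPatch is then run once against the merged driver created under ./target (for example u_merged001.drv).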

AutoPatch

Use AutoPatch to apply patches to the Oracle Applications file system or database. It
gathers necessary information about your system through a series of prompts. When
you have completed the prompts, AutoPatch performs all the tasks required to apply
the patch, including the following:
■ Reads patch metadata to determine patch dependencies and requirements.
■ Uploads patch information from a prior patch session to the database (if needed).
■ Reads and validates the patch driver file and reads the product driver files.
■ Compares version numbers of object modules from the product libraries and
version numbers of the existing files against the patch files.
■ Backs up all existing files that will be changed by the patch.
■ Copies files.
■ Archives files in libraries.
■ Relinks executables.
■ Generates forms, reports, message, graphics, and Java archive (JAR) files.
■ Compiles JSP files and invalid database objects.
■ Updates database objects.
■ Runs AutoConfig to update configuration files, if any template files are introduced
or updated by the patch.
■ Saves patch information to the database.
AutoPatch takes no action if a patch contains no new updates to files or database
objects in your system. If AutoPatch detects that there is a previously failed AutoPatch session, it will attempt to recover that session.Preparing your System for Patching.

Before you begin a patching session, there are some important tasks you need to
complete.

Enable Maintenance Mode

Before you initiate an AutoPatch session, you should enable maintenance mode and shut down the concurrent managers. During a maintenance mode downtime, user login is restricted. Users are redirected to a system downtime URL, which informs them that a maintenance session is in progress.

To enable or disable maintenance mode, use the Change Maintenance Mode menu in
AD Administration.
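
Maintenance mode can also be toggled from the command line, which is handy in scripted downtimes; a sketch, assuming the standard adsetmmd.sql script under $AD_TOP/patch/115/sql:

$ sqlplus apps/<apps_pwd> @$AD_TOP/patch/115/sql/adsetmmd.sql ENABLE
... apply the patch ...
$ sqlplus apps/<apps_pwd> @$AD_TOP/patch/115/sql/adsetmmd.sql DISABLE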

Shut Down Services
If you are applying a patch that updates or relinks files, shut down the corresponding services, such as the concurrent managers and Web server listeners.

Log File Description
adpatch.log main AutoPatch log file (default name)
adpatch.lgi for AutoPatch informational messages (default name)
adrelink.log for relinking

Command Line Arguments
You can direct the way AutoPatch operates by adding modifiers to the AutoPatch
start command. These modifiers may be in the form of arguments or options.

$ adpatch logfile=test.log
You can enter more than one token=value argument on a single command line by
separating them with one blank space as in the following AutoPatch command.

$ adpatch printdebug=y flags=hidepw
In some cases, you can include more than one value for a token. In this case, separatethe values with commas.
$ adpatch flags=nohidepw,trace
Comma-separated lists must not contain blank spaces. For example, this command is
not valid:
$ adpatch flags=nohidepw, trace
The following arguments are specific to AutoPatch and can be used to modify and
refine its behavior.

AutoPatch Options
The options= argument is used to pass generic options to AutoPatch. It takes the form
of a comma-separated list. Enter one option or a comma-separated list of options. For
example, options=nocopyportion,nogenerateportion. Do not include a space after the comma.

Option Description

autoconfig
Purpose: Tells AutoPatch to run AutoConfig automatically.
Default: autoconfig
Use options=noautoconfig if you are applying a number of patches in sequence and want to run AutoConfig once, after applying the last patch of the sequence.

checkfile
Default: checkfile
Use options=nocheckfile to turn off the checkfile feature.

compiledb
Default: compiledb for standard patches; nocompiledb for standard patch translations, documentation patches, and documentation patch translations.
Use options=nocompiledb to save time when multiple non-merged patches are applied in a maintenance window.

compilejsp
Default: compilejsp for standard patches; nocompilejsp for standard patch translations, documentation patches, and documentation patch translations.
Use options=nocompilejsp to save time when multiple non-merged patches are applied in a maintenance window.

copyportion
Default: copyportion
Use options=nocopyportion to tell AutoPatch not to perform the copy actions of the driver.

databaseportion
Default: databaseportion
Use options=nodatabaseportion to tell AutoPatch not to perform the database actions of the driver.

generateportion
Default: generateportion
Use options=nogenerateportion to tell AutoPatch not to perform the generate actions of the driver.

hotpatch
Default: nohotpatch

integrity
Purpose: Tells AutoPatch whether to verify that the version of each file referenced in a copy action matches the version present in the patch.
Default: nointegrity
Comments: Using options=nointegrity is safe and avoids some AutoPatch overhead.

phtofile
Default: nophtofile
Use options=phtofile to tell AutoPatch not to upload patch history information to the database.
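
For example, when applying several patches back to back in one maintenance window, the compile and AutoConfig steps can be deferred until the last patch; a sketch based on the options above:

$ adpatch options=noautoconfig,nocompiledb,nocompilejsp

After the final patch of the sequence, run AutoConfig and compile the database objects and JSPs once (for example through adadmin).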

Interactive or Non-interactive Patching
You can apply patches interactively or non-interactively.

Interactive patching means that you supply basic information that AutoPatch needs by responding to a series of prompts.

Non-interactive patching substantially reduces the need for user intervention when AutoPatch processes patching tasks. You create a defaults file that contains much of the information you would have supplied at the AutoPatch prompts. Then, when you run AutoPatch, you specify the name of the defaults file, the location of the patch top directory, the name of a driver file, and other parameters on the command line.

Create the defaults file.
Start AutoPatch, using the defaultsfile= argument, and specify the file name and the
path to the defaults file. This creates a defaults file for the current environment.
UNIX:
The file must be under the $APPL_TOP/admin/<SID> directory, where <SID> is
the database name (ORACLE_SID/TWO_TASK). For example:

$ adpatch defaultsfile=$APPL_TOP/admin/testdb1/adpatchdef.txt

$ adpatch defaultsfile=$APPL_TOP/admin/testdb1/def.txt logfile=1234567.log \
patchtop=$APPL_TOP/patches/1234567 driver=u1234567.drv workers=3 \
interactive=no

Restarting a Non-interactive AutoPatch Session

When AutoPatch is running non-interactively and encounters an error, it exits to the
operating system and reports a failure. The restart argument is intended specifically
for this circumstance. When AutoPatch sees the restart=yes argument, it assumes that
there is an old session and expects to find one. If it cannot, it will fail. Do not specify restart=yes to start a new AutoPatch session.

Complete the following steps:
1. Look through the log files, diagnose the error, and fix it.
2. Use the same command line options that you used initially, but add restart=yes.
UNIX:
$ adpatch defaultsfile=$APPL_TOP/admin/testdb1/def.txt logfile=1234567.log \
patchtop=$APPL_TOP/patches/1234567 driver=u1234567.drv workers=3 \
interactive=no restart=yes
Windows:
C:\> adpatch defaultsfile=%APPL_TOP%\admin\testdb1\def.txt \
logfile=1234567.log patchtop=%APPL_TOP%\patches\1234567 \
driver=u1234567.drv workers=3 interactive=no restart=yes

Abandoning a Non-interactive AutoPatch Session

When you specify interactive=no on the AutoPatch command line, AutoPatch expects
that there is no existing failed session. AutoPatch aborts if it finds restart files from a failed session. Running AutoPatch with the interactive=no and restart=yes command line arguments restarts the previously incomplete session.

To start a completely new AutoPatch session when there is an existing failed session,
specify interactive=no and abandon=yes on the AutoPatch command line. With this
command, AutoPatch deletes the restart files and any leftover database information
from the failed session.

Complete the following steps:
1. Verify that you do not want to restart the previous failed session.
2. Start AutoPatch with the abandon=yes option:
UNIX:
$ adpatch defaultsfile=$APPL_TOP/admin/testdb1/def.txt logfile=7654321.log \
patchtop=$APPL_TOP/patches/7654321 driver=c7654321.drv workers=3 \
interactive=no abandon=yes
Windows:
C:\> adpatch defaultsfile=%APPL_TOP%\admin\testdb1\def.txt \
logfile=7654321.log patchtop=%APPL_TOP%\patches\7654321 \
driver=c7654321.drv workers=3 interactive=no abandon=yes

/* Script to find out Patch level of mini Pack */

Select product_version,patch_level from fnd_product_installations
where patch_level like '%FND%';

E.g.
PATCH_LEVEL

11i.FND.H
11i.AX.I
11i.AK.G
11i.XLA.H
11i.GL.J
11i.FA.O

AD Patches Tables

AD_APPLIED_PATCHES
AD_PATCH_DRIVERS
AD_PATCH_RUNS
AD_APPL_TOPS
AD_RELEASES
AD_FILES
AD_FILE_VERSIONS
AD_PATCH_RUN_BUGS
AD_BUGS
AD_PATCH_COMMON_ACTIONS
AD_PATCH_RUN_BUG_ACTIONS
ad_comprising_patches
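
For example, to check whether a particular patch (or one of the bugs it delivers) has already been applied, you can query AD_BUGS and AD_APPLIED_PATCHES; a sketch with a hypothetical patch number:

SQL> select bug_number, creation_date
     from ad_bugs
     where bug_number = '1234567';

SQL> select patch_name, patch_type, creation_date
     from ad_applied_patches
     where patch_name = '1234567';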


Concurrent Manager

FND_CONCURRENT_QUEUES
FND_CONCURRENT_PROGRAMS
FND_CONCURRENT_REQUESTS
FND_CONCURRENT_PROCESSES
FND_CONCURRENT_QUEUE_SIZE
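
A quick example against these tables: list requests that are still pending or running, assuming the standard PHASE_CODE values 'P' and 'R':

SQL> select request_id, phase_code, status_code, requested_start_date
     from fnd_concurrent_requests
     where phase_code in ('P','R')
     order by requested_start_date;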


FND Tables

FND_APPL_TOPS
FND_LOGINS
FND_USER
FND_DM_NODES
FND_TNS_ALIASES
FND_NODES
FND_RESPONSIBILITY
FND_DATABASES
FND_UNSUCCESSFUL_LOGINS
FND_LANGUAGES
FND_APPLICATION
FND_PROFILE_OPTION_VALUES
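
For example, a sketch that lists application users and their last logon, assuming the standard FND_USER columns:

SQL> select user_name, last_logon_date
     from fnd_user
     order by last_logon_date desc;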


If you want any further clarification, check below metalink documents

Patching Best Practices and Reducing Downtime - Note:225165.1
Oracle Applications Patching FAQ - Note:174436.1
How to Merge Patches Using admrgpch - Note:228779.1

Cluster Commands on Sun Solaris

vxvm
vxprint -ht
su -8 nodes
vxvm -32 nodes
fail over application: active-passive
load balancing: active-active; quorum votes = n/2+1 (n = no. of nodes)
path=/etc/cluster/ccr/infrastructure
amnesia (starting last in first)
#boot -x (boot node in non-cluster mode)
#ccradm
heartbeat means keep-alive packets
cluster versions 3.0, 3.1, 3.2; NAFO (network adapter fail over)
IPMP means IP multipathing
DID means disk ID; these are called global devices
commands = /usr/cluster/bin
#scdidadm -L (list devices available)
#scdidadm -l (local devices)
scdidadm -r (add new device to cluster)
#mount -g will mount on all nodes; /global needs min 512MB on each node
path
manpath
#scsetup (menu driven)
#scconf -p (to see the configuration)
#scstat -q (status of the cluster)
#scconf -p (view of configuration)
#scstat -W -h primary nodename
#/.rhost
path install cluster=suncluster_3.0/tools/scinstall
..
#scstat
#scconf -p
scdidadm -L
#scconf - transp-jct
#scconf -transp-jct-etherhup
checking node id /etc/cluster/nodeid varlog=/var/cluster/logs/install/scinstall.bg.1374
#scstat -n
reboot --r
#scshutdown -y -g0
#scconf -a -q globaldev=d3
#scconf -a -q reset
#metaset -s disksetname -a -m hostname
#metstat -s diskname
#scstat -q (only quorum)
#scswitch -F -D simvol (online diskgroup)
#scswitch -Z -D simvol -h e3500
#scswitch -S -H E4500
#scswitch -m -D offline DG
#scswitch -r -D restart DG
#scrgadm -a -g share -h e4500,e3400
#scswitch -F -g (resource GN)
#scswitch -Z -g -h hostname
#scswitch -n -j
#scswitch -e -j
scrgadm -r -j
#scswitch -v -g
#scswitch -o -g
scrgadm -r -g
boot -m milestone=none

India Localization Patch Tool

1. perl $JA_TOP/inpatch/indpatch.pl drvr_file=5452893.drv fnd_patchset=F appspwd=apps japwd=ja mode=FCD systempwd=manager logfile=$JA_TOP/$APPLLOG/5452893/5452893.log test=n


2. compile - perl compile_il_apps_obj.pl appspwd=apps
3. sqlplus apps/apps @jai_ap_etds_4353842_mig1_apps.sql
4. sqlplus apps/apps @jai_ap_etds_4353842_mig2_apps.sql



1. perl $JA_TOP/inpatch/indpatch.pl drvr_file=4635814.drv fnd_patchset=F appspwd=apps japwd=ja mode=FCD systempwd=manager logfile=$JA_TOP/$APPLLOG/4635814/4635814.log test=n

sqlplus APPS/ @patch/115/sql/ja_in_util_pkg_s.sql
- sqlplus APPS/ @patch/115/sql/ja_in_util_pkg_b.sql




perl $JA_TOP/inpatch/indpatch.pl drvr_file=5498551.drv fnd_patchset=F appspwd=apps japwd=ja mode=FCD systempwd=oracle logfile=$JA_TOP/$APPLLOG/5498551/5498551.log


perl $JA_TOP/inpatch/indpatch.pl .. (On Unix)
C:\> perl %JA_TOP%\inpatch\indpatch.pl .. (On Windows)

adogjf() Unable to generate jar files under APPL_TOP

Failed to generate product JAR files in APPL_TOP -
/apps/dev1/testappl.


adogjf() Unable to generate jar files under APPL_TOP

AutoPatch error:
Failed to generate the product JAR files

AutoPatch error:
Error updating apps.zip


Is this a cloned instance? If yes, kindly check for the following files.
Go through the Note 261825.1

Fix
In the process of cloning, these files were not copied:
1)$APPL_TOP/admin//out/adcert.txt
2)$APPL_TOP/admin/adsign.txt
3)$APPL_TOP/admin/appltop.cer
4)$HOME/identitydb.obj


1. From $APPL_TOP, execute: adjkey -initialize

2. Regenerate all JAR files using the adadmin utility.


You should check the file
/apps/dev1/testappl/admin/TEST/log/ADMIN070906.log



3. Now, apply the C-driver again.
AutoPatch may have written informational messages to the file
/apps/dev1/testappl/admin/TEST/log/c2956236.lgi

You should check the file
/apps/dev1/testappl/admin/TEST/log/c2956236.log

1.c2956236.log
2.d2956236.log
3.g2956236.log

exec fnd_aolj_util.getClassVersionfromDB('oracle.apps.inv.transaction.server.BaseTransaction');

exec fnd_aolj_util.getClassVersionfromDB('oracle.apps.inv.keyentities.INVTransaction');

Create Stage Area 11i and 12i

Stage Area

As preparation for running Rapid Install, you must run a Perl script that creates the install directory and copies the contents of the Release 12 software bundle to the appropriate location in the file system.

Software Components

Your installation software comes in DVD format. The individual disks included in the
Release 11i and 12i software bundle are labeled as follows:
• Start Here - Disk 1
• APPL_TOP - Disk n
• RDBMS - Disk n
• Tools - Disk n
• Databases - Disk n

Creating the Stage Area Directory

To create the stage area directory, run the adautostg.pl script. If the script cannot
create the directories, or if there are other system parameters that require modification,
it prompts you to amend the parameters. You must fix any problems before you
continue with the setup process.

Follow these steps to set up a stage area installation:

1. Log in as the operating system user with sufficient privileges to mount, unmount, and eject the DVD. This user must also have write privileges to the stage area that
you set up.
2. Insert the Start Here disk in the DVD-ROM drive.
3. Mount the DVD (conditional).
If your system runs on a UNIX platform, and if you do not use AutoMount, you
must mount the Start Here disk now.
4. Verify software version (perl).
You must have perl 5.0053 or higher installed, and it must be in your PATH.

perl -v

If perl is not installed, you may download it from http://www.perl.com.
5. On UNIX, set the environment variable DISPLAY to an active and authorized
display.
6. Run the adautostg.pl script.
UNIX:
$ cd
$ perl /mnt/cdrom/Disk1/rapidwiz/adautostg.pl

The stage script message informs you that you may either stage all the Rapid Install components, or selected components only.
7. Set up the stage area directory.
At the prompt for the stage directory, enter the name of the system top-level
directory. The Rapid wizard stage12 directory will be created in this path. For
example, if you enter /u05 as the top-level directory, the resulting directory path
will be /u05/stage12.
8. Indicate the components to be staged.

The script prompts you to choose the components that you want to stage:

1. Oracle Applications
2. Oracle Database technology stack (RDBMS)
3. Oracle Applications database (Databases)
4. Oracle Applications technology stack (Tools)
5. APPL_TOP

Enter one or more components, separating each one with a space. You can indicate
that you want to stage only the database technology stack, only the APPL_TOP, and
so on.

If you choose 1 (the default), all the main components will be staged.
9. Insert the Rapid Install DVD.
Insert the relevant DVD, as required in the prompt. The system message lists the
files it has copied and their location in the stage area directory.
Stage Area Directory Structure
The stage area created by adautostg.pl looks like this: a top-level directory Stage12, with subdirectories startCD, oraApps, oraDB, oraAS, and oraAppDB.

Starting Rapid Install
Once the stage directory is created, start Rapid Install as described in the following section.
UNIX:
$ cd /u05/Stage12/startCD/Disk1/rapidwiz
$ ./rapidwiz

Oracle Data Pump in Oracle Database 10g

Oracle Data Pump is a newer, faster and more flexible alternative to the "exp" and "imp" utilities used in previous Oracle versions. In addition to basic import and export functionality, Data Pump provides a PL/SQL API and support for external tables.

For the examples to work, we must first unlock the schema user (U_NAME) account and create a directory object it can access:

CONN sys/password@SID_NAME AS SYSDBA
ALTER USER U_NAME IDENTIFIED BY PASSWD ACCOUNT UNLOCK;
GRANT CREATE ANY DIRECTORY TO U_NAME;

CREATE OR REPLACE DIRECTORY DIR_NAME AS '/u05/oradata/';

GRANT READ, WRITE ON DIRECTORY DIR_NAME TO SCHEMA_NAME;

Table Exports/Imports

The TABLES parameter is used to identify the tables that are to be exported. The following is an example of the table export and import syntax:

expdp USER_NAME/PASSWD@SID_NAME tables=TABLE_NAME1,TABLE_NAME2,...TABLE_NAMEn directory=DIR_NAME dumpfile=DUMPFILE_NAME1.dmp logfile=LOG_FILENAME.log

impdp USER_NAME/PASSWD@SID_NAME tables=TABLE_NAME1,TABLE_NAME2,...TABLE_NAMEn directory=DIR_NAME dumpfile=DUMPFILE_NAME1.dmp logfile=LOG_FILENAME.log

The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
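
For instance, re-running the table import into tables that already contain data might look like this sketch (same placeholder names as above):

impdp USER_NAME/PASSWD@SID_NAME tables=TABLE_NAME1 directory=DIR_NAME dumpfile=DUMPFILE_NAME1.dmp logfile=LOG_FILENAME.log table_exists_action=append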

Schema Exports/Imports

The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is used to identify the schemas to be exported. The following is an example of the schema export and import syntax:

expdp USER_NAME/PASSWD@SID_NAME schemas=SCHEMA_NAME directory=DIR_NAME dumpfile=INV.dmp logfile=EXPINV.log

impdp USER_NAME/PASSWD@SID_NAME schemas=SCHEMA_NAME directory=DIR_NAME dumpfile=INV.dmp logfile=IMPINV.log

Full Database Exports/Imports

The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax:

expdp sys/password@SID_NAME full=Y directory=DIR_NAME dumpfile=FULLDB.dmp logfile=FULLDB10G.log

impdp sys/password@SID_NAME full=Y directory=DIR_NAME dumpfile=FULLDB.dmp logfile=FULLDB10G.log

The DBA_DATAPUMP_JOBS view can be used to monitor the current jobs:

SQL> select * from dba_datapump_jobs;
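
You can also re-attach a client to a running job and control it from the interactive prompt; a sketch, assuming the job name was taken from DBA_DATAPUMP_JOBS:

expdp USER_NAME/PASSWD@SID_NAME attach=JOB_NAME

Export> STATUS
Export> STOP_JOB=IMMEDIATE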

expdp help=y

Keyword Description (Default)
------------------------------------------------------------------------------
ATTACH Attach to existing job, e.g. ATTACH [=job name].
CONTENT Specifies data to unload where the valid keywords are:
(ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY Directory object to be used for dumpfiles and logfiles.
DUMPFILE List of destination dump files (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE Calculate job estimates where the valid keywords are:
(BLOCKS) and STATISTICS.
ESTIMATE_ONLY Calculate job estimates without performing the export.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN SCN used to set session snapshot back to.
FLASHBACK_TIME Time used to get the SCN closest to the specified time.
FULL Export entire database (N).
HELP Display Help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of export job to create.
LOGFILE Log file name (export.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile (N).
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to export a subset of a table.
SCHEMAS List of schemas to export (login schema).
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
TABLES Identifies a list of tables to export - one schema only.
TABLESPACES Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command Description
------------------------------------------------------------------------------
ADD_FILE Add dumpfile to dumpfile set.
ADD_FILE=dumpfile-name
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=.
START_JOB Start/resume current job.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STATUS=[interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the
Data Pump job.

impdp help=y

Keyword Description (Default)
------------------------------------------------------------------------------
ATTACH Attach to existing job, e.g. ATTACH [=job name].
CONTENT Specifies data to load where the valid keywords are:
(ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY Directory object to be used for dump, log, and sql files.
DUMPFILE List of dumpfiles to import from (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE Calculate job estimates where the valid keywords are:
(BLOCKS) and STATISTICS.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FLASHBACK_SCN SCN used to set session snapshot back to.
FLASHBACK_TIME Time used to get the SCN closest to the specified time.
FULL Import everything from source (Y).
HELP Display help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of import job to create.
LOGFILE Log file name (import.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile.
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to import a subset of a table.
REMAP_DATAFILE Redefine datafile references in all DDL statements.
REMAP_SCHEMA Objects from one schema are loaded into another schema.
REMAP_TABLESPACE Tablespace object are remapped to another tablespace.
REUSE_DATAFILES Tablespace will be initialized if it already exists (N).
SCHEMAS List of schemas to import.
SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.
SQLFILE Write all the SQL DDL to a specified file.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STREAMS_CONFIGURATION Enable the loading of Streams metadata
TABLE_EXISTS_ACTION Action to take if imported object already exists.
Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES Identifies a list of tables to import.
TABLESPACES Identifies a list of tablespaces to import.
TRANSFORM Metadata transform to apply (Y/N) to specific objects.
Valid transform keywords: SEGMENT_ATTRIBUTES and STORAGE.
ex. TRANSFORM=SEGMENT_ATTRIBUTES:N:TABLE.
TRANSPORT_DATAFILES List of datafiles to be imported by transportable mode.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded.
Only valid in NETWORK_LINK mode import operations.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.
Only valid for NETWORK_LINK and SQLFILE.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command Description (Default)
------------------------------------------------------------------------------
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=.
START_JOB Start/resume current job.
START_JOB=SKIP_CURRENT will start the job after skipping
any action which was in progress when job was stopped.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STATUS=[interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the
Data Pump job.

Procedure to enable SQL trace for users on your database

What is tkprof

tkprof is one of the most helpful utilities available to DBAs for diagnosing performance issues. It essentially formats a trace file into a more readable format for performance analysis. The DBA can then identify and resolve performance issues such as poor SQL, indexing, and wait events.

Analyzing Results

· Compare the number of parses to number of executions.
· Search for SQL statements that do not use bind variables
· Identify those statements that perform full table scans, multiple disk reads, and high CPU consumption.

1. Find the User Dump Directory

SQL> select value from v$parameter where name = 'user_dump_dest';


2. Get the SID and SERIAL# for the process you want to trace.

SQL> select sid, serial# from sys.v_$session;
SID SERIAL#
---------- ----------
8 13607

3. Enable tracing for your selected process:

SQL> ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
SQL> execute dbms_system.set_sql_trace_in_session(8,13607, true);

4. Ask the user to run just enough to demonstrate the problem.

5. Disable tracing for your selected process:

SQL> execute dbms_system.set_sql_trace_in_session(8,13607, false);
SQL> ALTER SYSTEM SET TIMED_STATISTICS = FALSE;

6. Look for trace file in USER_DUMP_DEST

$ cd /app/oracle/admin/oradba/udump
$ ls -ltr
total 8
-rw-r----- 1 oracle dba 2764 Mar 30 12:37 ora_9294.trc

7. Run TKPROF to analyse trace output

$ tkprof ora_9294.trc ora_9294.lst EXPLAIN=SCHEMA_NAME/PASSWD

8. View/print output

Apps Syntax for adrelink and compile

Compile Forms and Lib

f60gen module=AFMLSUBB.pls userid=apps/apps output_file=AFMLSUBB.plx module_type=library batch=yes compile_all=special
f60gen module=ASOQTLIN.pll userid=apps/apps output_file=ASOQTLIN.plx module_type=library batch=yes compile_all=special
f60gen module=CSXSRISR.fmb userid=apps/accel Output_File=/ora/apps/prodappl/cs/11.5.0/forms/US/CSXSRISR.fmx log=/ora/apps/prodappl/cs/11.5.0/log/CSXSRISR.log
f60gen module=CSXCHTAB.fmb userid=apps/apps Output_File=/oracle/apps/trgappl/cs/11.5.0/forms/US/CSXCHTAB.fmx Module_Type=form Batch=Yes Compile_All=Special


Compile Package and Package Body

select 'alter ' || object_type ||' '|| OBJECT_NAME || ' compile' || ';' from dba_objects where object_type in ('FUNCTION','PACKAGE','PROCEDURE','TRIGGER','VIEW') AND STATUS ='INVALID';

select 'alter ' || 'PACKAGE '|| OBJECT_NAME || ' compile body' || ';' from dba_objects where object_type in ('PACKAGE BODY') AND STATUS ='INVALID';
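
As an alternative to spooling and running the generated ALTER statements, all invalid objects can be recompiled in one pass with the standard utlrp.sql script; a minimal sketch, run as SYSDBA:

SQL> connect / as sysdba
SQL> @?/rdbms/admin/utlrp.sql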

Check the Apps file version

strings -a $XXX_TOP/resource/EAMOPMDF.plx | grep '$Header'

strings -a $XXX_TOP/patch/115/sql/EAMWOTHB.pls | grep '$Header'

Manually issue the adrelink command to relink one executable.

e.g. adrelink.sh force=y ranlib=y "FND FNDFS"
adrelink.sh force=y "ad adadmin"

RAC Installation through RAPID Clone

RAC installation:

1. Initially, the Sun team would have finished the clustering between the two nodes. This is to ensure that the number of nodes is correct in the installation. You can always add extra nodes later. A cluster needs to be set up between the n nodes we will be using. By nodes, here I mean only database nodes.

2. Stage the Oracle Database installation products. It is just a copy command of all the files present here.

3. Installation of Oracle Home:
Here we are installing only the home and not the database. This is because the basic setup of RAC uses a shared storage device which houses all the database files, redo log files, control files and the SPFILE. The listener-specific files and the archive log files are present on the specific servers, as they are instance specific.

4. In the installation of the home, select "Customised" in the Database configuration.

5. There occurs an entry for the Shared Configuration file.
Here we need to create a file named "para_raw_device_file" which contains the volume manager raw device details for all the dbf files, redo log files, control files and the spfile. Entries would look like this:

rbs03=/dev/vx/rdsk/racdg/rbs03
temp10=/dev/vx/rdsk/racdg/temp10
cntrl01=/dev/vx/rdsk/racdg/cntrl01
cntrl02=/dev/vx/rdsk/racdg/cntrl02
cntrl03=/dev/vx/rdsk/racdg/cntrl03
spfile=/dev/vx/rdsk/racdg/spfile

6. /dev/vx/rdsk/racdg would be the path on the RAW device. This is set up by the hardware team, who would be configuring the RAW device. Here the main idea is to ensure a faster operation of the servers, as no file system is mounted on the RAW device. Importantly, before we copy the files to the RAW device, we should have a count of the database files, redo log files and the control files and create them on the RAW device using Veritas Volume Manager. The size of the data files needs to be pre-determined, and the volumes should be created prior to moving the datafiles to RAW. Another important aspect is that any file which is going to be present on the RAW device will not have any file extension.

7. So system01.dbf would be system01 and cntrl01.ctl would be cntrl01

8. Now the OUI takes you to listener configuration and the port pool selection. We finish these processes here and proceed towards the manual database creation using dbca.

rdsk contains files of the format
crw------- 1 oracle dba 295, 91019 Oct 10 21:29 system01
crw------- 1 oracle dba 295, 91021 Oct 10 21:29 system02
crw------- 1 oracle dba 295, 91022 Oct 10 21:29 system03

and /dev/vx/dsk contains files of the format
brw------- 1 root root 295, 15000 Sep 8 19:05 archlog

Sun cluster is used to manage clusters between two nodes, while Oracle Cluster is used to manage clusters between two Oracle Instances.

GSD configuration:
Global Services Daemon


Go to Oracle home/bin

$ gsdctl start

Go to the Oracle home/bin and give

$ ./dbca -datafileDestination /dev/vx/rdsk/racdg/

Now we need to select the Oracle Cluster database.

Dedicated or Shared Server (MTS) - ????


RAC Steps

Preparing for the installation of RAC

Configure the shared disks

Create the directory /var/opt/oracle and set ownership to the oracle user.
If this installation creates the first ORACLE_HOME on your cluster for Real Application Clusters, then set the SRVM_SHARED_CONFIG environment variable to the name of the 100MB volume raw device or shared file that you have pre-created and set up for configuration storage by entering, for example:

setenv SRVM_SHARED_CONFIG shared_filename

Where shared_filename is the name of the configuration raw device file. If you are using a cluster file system, then make sure the partition on which this file resides has at least 100MB of free space.

Set the DBCA_RAW_CONFIG environment variable. Do this, for example, by executing the following where filename is the name of the ASCII file containing the configuration information:

setenv DBCA_RAW_CONFIG filename

If this is the first installation on your cluster and you have not set the environment variable, then the Installer prompts you for the name of the 100MB file by displaying the Shared Configuration File Name page

$cd /var/opt/oracle
$ cat srvConfig.loc
srvconfig_loc=/dev/vx/rdsk/racdg/rac_srvconfig_100m

$ SRVM_SHARED_CONFIG=/dev/vx/rdsk/racdg/rac_srvconfig_100m
$ export SRVM_SHARED_CONFIG

$DBCA_RAW_CONFIG=para_raw_device_file
$export DBCA_RAW_CONFIG

$ DISPLAY=172.16.1.72:0.0
$ export DISPLAY
srvconfig_loc=/var/opt/oracle/srvConfig.loc

root@cmwdb1 # ./orainstRoot.sh
Creating Oracle Inventory pointer file (/var/opt/oracle/oraInst.loc)
Changing groupname of /oracle/oraInventory to dba.


Establish system environment variables

· Set a local bin directory in the user's PATH, such as /usr/local/bin or /opt/bin. It is necessary to have execute permissions on this directory.
· Set the DISPLAY variable to point to the system's (from where you will run OUI) IP address or name, X server, and screen.
· Set a temporary directory path for TMPDIR with at least 20 MB of free space to which the OUI has write permission.

$ set
CLASSPATH=/oracle/9.2.3/JRE:/oracle/9.2.3/jlib:/oracle/9.2.3/rdbms/jlib:/oracle/
9.2.3/network/jlib
DBCA_RAW_CONFIG=para_raw_device_file
DISPLAY=172.16.1.72:0.0
HOME=/oracle/9.2.3
HZ=
IFS=
LANG=C
LOGNAME=oracle
MAIL=/var/mail/oracle
MAILCHECK=600
NLS_DATE_FORMAT=DD-MON-RR
NLS_LANG=AMERICAN_AMERICA.US7ASCII
NLS_NUMERIC_CHARACTER=.,
NLS_SORT=binary
OPTIND=1
ORACLE_BASE=/oracle
ORACLE_HOME=/oracle/9.2.3
ORACLE_TERM=xterm
ORA_NLS33=/oracle/9.2.3/ocommon/nls/admin/data
PATH=/oracle/9.2.3/bin:/usr:/usr/bin:/usr/lib:/etc:/usr/ucb
PS1=$
PS2=>
SHELL=/bin/sh
SRVM_SHARED_CONFIG=/dev/vx/rdsk/racdg/rac_srvconfig_100m
TERM=dtterm
TZ=Asia/Calcutta


Set the kernel parameters in the /etc/system file.
Kernel Parameter Setting Purpose

SHMMAX 4294967295 Maximum allowable size of one shared memory segment (4 Gb)
SHMMIN 1 Minimum allowable size of a single shared memory segment.
SHMMNI 100 Maximum number of shared memory segments in the entire system.
SHMSEG 10 Maximum number of shared memory segments one process can attach.
SEMMNI 1024 Maximum number of semaphore sets in the entire system.
SEMMSL 500 Minimum recommended value. SEMMSL should be 10 plus the largest PROCESSES parameter of any Oracle database on the system.
SEMMNS 1024 Maximum semaphores on the system. This setting is a minimum recommended value. SEMMNS should be set to the sum of the PROCESSES parameter for each Oracle database, add the largest one twice, plus add an additional 10 for each database.
SEMOPM 100 Maximum number of operations per semop call.
SEMVMX 32767 Maximum value of a semaphore.
(swap space) xx MB Two to four times your system's physical memory size.

set md:mirrored_root_flag=1
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=200
set shmsys:shminfo_shmmni=200
set shmsys:shminfo_shmseg=200
set semsys:seminfo_semmsl=2048
set semsys:seminfo_semmns=2048
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767
set semsys:seminfo_semmni=2048
set semsys:seminfo_semmap=1024
set semsys:seminfo_semmnu=2048
set semsys:seminfo_semume=200

root@cmwdb1 # cat /oracle/9.2.3/para_raw_device_file
system=/dev/vx/rdsk/racdg/system
spfile=/dev/vx/rdsk/racdg/spfile
users=/dev/vx/rdsk/racdg/users
temp=/dev/vx/rdsk/racdg/temp
undotbs1=/dev/vx/rdsk/racdg/undotbs1
undotbs2=/dev/vx/rdsk/racdg/undotbs2
example=/dev/vx/rdsk/racdg/example
cwmlite=/dev/vx/rdsk/racdg/cwmlite
indx=/dev/vx/rdsk/racdg/indx
tools=/dev/vx/rdsk/racdg/tools
drsys=/dev/vx/rdsk/racdg/drsys
control01=/dev/vx/rdsk/racdg/controlfile1
control02=/dev/vx/rdsk/racdg/controlfile2
redo1_1=/dev/vx/rdsk/racdg/log11
redo1_2=/dev/vx/rdsk/racdg/log12
redo2_1=/dev/vx/rdsk/racdg/log21
redo2_2=/dev/vx/rdsk/racdg/log22


Configuring the existing system to RAC... Follow 279956.1 Metalink note.

Pre Clone existing database and Apps server.

DBTier


1. perl adpreclone.pl dbTier

AppsTier


2. perl adpreclone.pl appsTier


Then Copy datafile and Apps file to target node.

=======================================

Post clone Target database

dbTier

1. perl adcfgclone.pl dbTier

appsTier
2. perl adcfgclone.pl appsTier


Take backup control file to trace


SQL> alter database backup controlfile to trace;


Then copy the datafile to RAW device.

1. Before copying, you need to create the volume manager volumes on the RAW device.

eg

dd if=/cmwdb1/cmwproddata/system01.dbf of=/dev/vx/rdsk/racdg/system01 bs=8192

2. After copying the datafiles, recreate the controlfile from the new Oracle home.

new home(9.2.3)
old home(9.2.0.3)

eg,

CREATE CONTROLFILE REUSE SET DATABASE CMWPROD RESETLOGS ARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 16
MAXLOGMEMBERS 5
MAXDATAFILES 500
MAXINSTANCES 1
MAXLOGHISTORY 2722
LOGFILE
GROUP 1 (
'/dev/vx/rdsk/racdg/log01a',
'/dev/vx/rdsk/racdg/log01b'
) SIZE 20M,
GROUP 2 (
'/dev/vx/rdsk/racdg/log02a',
'/dev/vx/rdsk/racdg/log02b'
) SIZE 20M,
GROUP 3 (
'/dev/vx/rdsk/racdg/log03a',
'/dev/vx/rdsk/racdg/log03b'
) SIZE 20M
-- STANDBY LOGFILE
DATAFILE
'/dev/vx/rdsk/racdg/system01',
'/dev/vx/rdsk/racdg/system02',
'/dev/vx/rdsk/racdg/system03',
'/dev/vx/rdsk/racdg/system04',
'/dev/vx/rdsk/racdg/system05',
'/dev/vx/rdsk/racdg/ctxd01',
'/dev/vx/rdsk/racdg/owad01',
'/dev/vx/rdsk/racdg/applsysd01',
'/dev/vx/rdsk/racdg/applsysx01',

========================================

database startup

Then,

Copy the appsutil, oui21, appsoui to new home.

verify metalink 165195.1


1. Then run adbldxml.sh from <oracle home>/appsutil/bin
2. Run adconfig.sh
3. Go to the apps tier and run the admkappsutil.pl script
4. Copy appsutil.zip to the Oracle home
5. unzip appsutil.zip
6. perl adpreclone.pl database
7. perl adcfgclone.pl database

One database instance is now created.

Run the same steps on the other node:

======================================

Zip the appsutil folder in the new Oracle home on node 1 and copy it to the second node.

> perl adcfgclone.pl database
An error occurs at around 40%.

You need to configure thread 2:

ALTER DATABASE ADD LOGFILE THREAD 2
GROUP 4 ( '/dev/vx/rdsk/racdg/log4a', '/dev/vx/rdsk/racdg/log4b' ) SIZE 20M,
GROUP 5 ( '/dev/vx/rdsk/racdg/log5a', '/dev/vx/rdsk/racdg/log5b' ) SIZE 20M,
GROUP 6 ( '/dev/vx/rdsk/racdg/log6a', '/dev/vx/rdsk/racdg/log6b' ) SIZE 20M;
ALTER DATABASE ENABLE PUBLIC THREAD 2;
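
To confirm the second redo thread is in place before re-running the clone step, a quick check sketch:

SQL> select thread#, status, enabled from v$thread;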

========================================

Again you run

$perl adcfgclone.pl database

=========================================
Now you need to configure the listener.ora and tnsnames.ora files.
=========================================

RAPID CLONE

PRECLONE STEPS

DATABASE TIER:

[oracle2@apps VISION_apps]$ perl adpreclone.pl dbTier
Running Rapid Clone with command...
perl /appscoe2/appscoe/visiondb/9.2.0/appsutil/bin/adclone.pl java=/appscoe2/appscoe/visiondb/9.2.0/appsoui/jre/1.3.1 mode=stage stage=/appscoe2/appscoe/visiondb/9.2.0/appsutil/clone component=dbTier method=CUSTOM dbctx=/appscoe2/appscoe/visiondb/9.2.0/appsutil/VISION_apps.xml showProgress

Beginning database tier Stage - Fri Aug 25 16:57:34 2006
APPS Password : apps
Log file located at /appscoe2/appscoe/visiondb/9.2.0/appsutil/log/VISION_apps/StageDBTier_08250457.log
Completed Stage...
Fri Aug 25 17:13:46 2006

APPS TIER:

[oracle2@apps VISION_apps]$ perl adpreclone.pl appsTier
Running Rapid Clone with command...
perl /appscoe2/appscoe/visionappl/ad/11.5.0/bin/adclone.pl java=/appscoe2/appscoe/visionora/iAS/appsoui/jre/1.3.1 mode=stage stage=/appscoe2/appscoe/visioncomn/clone component=appsTier method=CUSTOM appctx=/appscoe2/appscoe/visionappl/admin/VISION_apps.xml showProgress

Beginning application tier Stage - Fri Aug 25 17:21:45 2006
Log file located at /appscoe2/appscoe/visionappl/admin/VISION_apps/log/StageAppsTier_08250521.log
Completed Stage...
Fri Aug 25 17:29:48 2006

Now all the tops need to be manually copied from the source system to the destination system, and they need to be renamed. Say we are cloning dev from prod; then prodappl, prodcomn, etc. need to be renamed as devappl, devcomn, etc.
mv prodappl devappl


POST CLONE:

DATABASE TIER:

[oracle3@apps bin]$ perl adcfgclone.pl dbTier
Enter the APPS password [APPS]:
apps

First Creating a new context file for the cloned system.
The program is going to ask you for information about the new system:


Provide the values required for creation of the new Database Context file.

Do you want to use a virtual hostname for the target node (y/n) [n] ?:n

Target instance is a Real Application Cluster (RAC) instance (y/n) [n]:n

Target System database name [VISION]:CLONE

Target system RDBMS ORACLE_HOME directory [/appscoe2/appscoe/visiondb/9.2.0]:/appscoe3/appsclone/clonedb/9.2.0

Target system utl_file accessible directories list [/usr/tmp]:

Number of DATA_TOP's on the target system [1]:1

Target system DATA_TOP 1 [/appscoe2/appscoe/visiondata]:/appscoe3/appsclone/clonedata

Do you want to preserve the Display set to apps:0.0 (y/n) [y] ?:y

Clone Context uses the same port pool mechanism as the Rapid Install
Once you choose a port pool, Clone Context will validate the port availability.

Enter the port pool number [0-99]:
10

Checking the port pool 10
done: Port Pool 10 is free
Database port is 1531

Creating the new Database Context file from :
/appscoe2/appscoe/visiondb/9.2.0/appsutil/template/adxdbctx.tmp

The new database context file has been created :
/appscoe3/appsclone/clonedb/9.2.0/appsutil/CLONE_apps.xml

Log file located at /appscoe3/appsclone/clonedb/9.2.0/appsutil/clone/bin/CloneContext_08271007.log

Running Rapid Clone with command:
perl /appscoe3/appsclone/clonedb/9.2.0/appsutil/clone/bin/adclone.pl java=/appscoe3/appsclone/clonedb/9.2.0/appsutil/clone/bin/../jre mode=apply stage=/appscoe3/appsclone/clonedb/9.2.0/appsutil/clone/bin/.. component=dbTier method=CUSTOM dbctxtg=/appscoe3/appsclone/clonedb/9.2.0/appsutil/CLONE_apps.xml showProgress contextValidated=true

Beginning database tier Apply - Sun Aug 27 10:09:36 2006
Log file located at /appscoe3/appsclone/clonedb/9.2.0/appsutil/log/CLONE_apps/ApplyDBTier_08271009.log
Completed Apply...
Sun Aug 27 10:12:57 2006

Beginning APPSDB_CLONE registration to central inventory...

ORACLE_HOME NAME : APPSDB_CLONE
ORACLE_HOME PATH : /appscoe3/appsclone/clonedb/9.2.0
Using Inventory location in /etc/oraInst.loc
Log file located at /etc/oraInventory/logs/OracleHomeCloner_08271012.log
AC-00002: Error: Unable to create log file - /etc/oraInventory/logs/OracleHomeCloner_08271012.log
AC-00002: Error: Unable to create log file - /etc/oraInventory/logs/OracleHomeCloner_08271012.log
Exception in thread "main" java.lang.RuntimeException: AC-00002: Unable to create log file - /etc/oraInventory/logs/OracleHomeCloner_08271012.log
Raised by oracle.apps.ad.util.LogFile
at oracle.apps.ad.util.LogFile.appendLogFile(LogFile.java:356)
at oracle.apps.ad.util.LogFile.log(LogFile.java:233)
at oracle.apps.ad.util.LogFile.info(LogFile.java:212)
at oracle.apps.ad.clone.util.OracleHomeCloner.registerOracleHome(OracleHomeCloner.java:307)
at oracle.apps.ad.clone.util.OracleHomeCloner.registerOracleHome(OracleHomeCloner.java:165)
at oracle.apps.ad.clone.util.OracleHomeCloner.main(OracleHomeCloner.java:1122)

ERROR: Registration Failed... Please check log file.

You can rerun this registration with the following script:
/appscoe3/appsclone/clonedb/9.2.0/appsutil/out/CLONE_apps/regOUI_APPSDB_CLONE.sh

Starting database listener for CLONE:
Running:
/appscoe3/appsclone/clonedb/9.2.0/appsutil/scripts/CLONE_apps/addlnctl.sh start CLONE

You are running addlnctl.sh version 115.6

Logfile: /appscoe3/appsclone/clonedb/9.2.0/appsutil/log/CLONE_apps/addlnctl.txt

Starting listener process CLONE ...


Listener CLONE has already been started.


addlnctl.sh: exiting with status 0

Due to this error, the registration script had to be rerun.

[oracle3@apps CLONE_apps]$ ./regOUI_APPSDB_CLONE.sh
Using Inventory location in /etc/oraInst.loc
Log file located at /etc/oraInventory/logs/OracleHomeCloner_08271148.log
ORACLE_HOME /appscoe3/appsclone/clonedb/9.2.0 was registered successfully.
ERRORCODE = 0 ERRORCODE_END

APPS TIER:

[oracle3@apps bin]$ perl adcfgclone.pl appsTier
Enter the APPS password [APPS]:
apps

First Creating a new context file for the cloned system.
The program is going to ask you for information about the new system:


Provide the values required for creation of the new APPL_TOP Context file.

Do you want to use a virtual hostname for the target node (y/n) [n] ?:n

Target system database SID [VISION]:CLONE

Target system database server node [apps]:

Target system database domain name [tcs.com]:

Does the target system have more than one application tier server node (y/n) [n] ?:n

Is the target system APPL_TOP divided into multiple mount points (y/n) [n] ?:n

Target system APPL_TOP mount point [/appscoe2/appscoe/visionappl]:/appscoe3/appsclone/cloneappl

Target system COMMON_TOP directory [/appscoe2/appscoe/visioncomn]:/appscoe3/appsclone/clonecomn

Target system 8.0.6 ORACLE_HOME directory [/appscoe2/appscoe/visionora/8.0.6]:/appscoe3/appsclone/cloneora/8.0.6

Target system iAS ORACLE_HOME directory [/appscoe2/appscoe/visionora/iAS]:/appscoe3/appsclone/cloneora/iAS

Do you want to preserve the Display set to apps:0.0 (y/n) [y] ?:y

Clone Context uses the same port pool mechanism as the Rapid Install
Once you choose a port pool, Clone Context will validate the port availability.

Enter the port pool number [0-99]:
10

Checking the port pool 10
done: Port Pool 10 is free
Web Listener port is 8010
Complete port information available at /appscoe3/appsclone/cloneappl/admin/out/CLONE_apps/portpool.lst

UTL_FILE_DIR on database tier consists of the following directories.

1. /usr/tmp
2. /usr/tmp
3. /appscoe3/appsclone/clonedb/9.2.0/appsutil/outbound/CLONE_apps
4. /usr/tmp
Choose a value which will be set as APPLPTMP value on the target node [1]:1

Creating the new APPL_TOP Context file from :
/appscoe2/appscoe/visionappl/ad/11.5.0/admin/template/adxmlctx.tmp

The new APPL_TOP context file has been created :
/appscoe3/appsclone/cloneappl/admin/CLONE_apps.xml

Log file located at /appscoe3/appsclone/clonecomn/clone/bin/CloneContext_08271201.log

Running Rapid Clone with command:
perl /appscoe3/appsclone/clonecomn/clone/bin/adclone.pl java=/appscoe3/appsclone/clonecomn/clone/bin/../jre mode=apply stage=/appscoe3/appsclone/clonecomn/clone/bin/.. component=appsTier method=CUSTOM appctxtg=/appscoe3/appsclone/cloneappl/admin/CLONE_apps.xml showProgress contextValidated=true

Beginning application tier Apply - Sun Aug 27 12:04:22 2006
Log file located at /appscoe3/appsclone/cloneappl/admin/CLONE_apps/log/ApplyAppsTier_08271204.log
Completed Apply...
Sun Aug 27 12:11:45 2006

Beginning APPSIAS_CLONE registration to central inventory...

ORACLE_HOME NAME : APPSIAS_CLONE
ORACLE_HOME PATH : /appscoe3/appsclone/cloneora/iAS
Using Inventory location in /etc/oraInst.loc
Log file located at /etc/oraInventory/logs/OracleHomeCloner_08271211.log
ORACLE_HOME /appscoe3/appsclone/cloneora/iAS was registered successfully.

Starting application Services for CLONE:
Running:
/appscoe3/appsclone/clonecomn/admin/scripts/CLONE_apps/adstrtal.sh APPS/

Creating a Custom Application in Applications 11i


Custom Applications are required if you are creating new forms, reports, etc. This allows you to segregate your custom written files from the standard seeded functionality that Oracle Applications provide. Customizations can therefore be preserved when applying patches or upgrades to your environment.


1) Make the directory structure for your custom application files.

cd $APPL_TOP

mkdir mz

mkdir mz/11.5.0

mkdir mz/11.5.0/admin

mkdir mz/11.5.0/admin/sql

mkdir mz/11.5.0/admin/odf

mkdir mz/11.5.0/sql

mkdir mz/11.5.0/bin

mkdir mz/11.5.0/reports

mkdir mz/11.5.0/reports/US

mkdir mz/11.5.0/forms

mkdir mz/11.5.0/forms/US

mkdir mz/11.5.0/$APPLLIB

mkdir mz/11.5.0/$APPLOUT

mkdir mz/11.5.0/$APPLLOG


2) Add the custom module into the environment


Apply ADX.E.1 and add the entry to topfile.txt as a standard product top entry (follow the existing model in the file)


Customised environment variables can be added to AutoConfig by using the filename specified by s_custom_file, which is then called from the APPSORA.env file.


If using Forms Listener Servlet, you may also need to add $CUSTOM_TOP to formsservlet.ini in $APACHE_TOP/Jserv/etc


3) Create new tablespace for database objects

create tablespace MZ datafile '/emea/oracle/visuk09/visuk09data/mz.dbf' size 10M default storage(initial 10k next 10k);


4) Create schema

create user mz identified by mz default tablespace mz

temporary tablespace temp quota unlimited on mz quota unlimited on temp;

grant connect, resource to mz;


5) Register your Oracle Schema.

Login to Applications with System Administrator responsibility

Navigate to Application-->Register

Application = MZ Custom

Short Name = MZ

Basepath = MZ_TOP

Description = MZ Custom Application


6) Register Oracle User

Navigate to Security-->Oracle-->Register

Database User Name = MZ

Password = MZ

Privilege = Enabled

Install Group = 0

Description = MZ Custom Application User


7) Add Application to a Data Group

Navigate to Security-->Oracle-->DataGroup

Data Group = MZGroup

Description = MZ Custom Data Group

Click on "Copy Applications from" and pick Standard data Group, then add the following entry.

Application = MZ Custom

Oracle ID = APPS

Description = MZ Custom Application


8) Create custom request group

This will act as a placeholder for any custom reports we wish to make available for the Custom Responsibility (which is defined at a later stage)

Navigate to Security-->responsbility-->Request

Group = MZ Request Group

Application = MZ Custom

Code = MZ

Description = MZ Custom Requests

We will not define any requests to add to the group at this stage, but you can add some now if required.


9) Create custom menu

This will act as a placeholder for any menu items we wish to make available for the Custom Responsibility (which is defined at a later stage) We will create two menus, one for Core Applications and one for Self Service.

Navigate to Application-->Menu

Menu = MZ_CUSTOM_MENU

User Menu Name = MZ Custom Application

Menu Type =

Description = MZ Custom Application Menu

Seq = 100

Prompt = View Requests

Submenu =

Function = View All Concurrent Requests

Description = View Requests
Seq = 110

Prompt = Run Requests

Submenu =

Function = Requests: Submit

Description = Submit Requests
Menu = MZ_CUSTOM_MENU_SSWA

User Menu Name = MZ Custom Application SSWA

Menu Type =

Description = MZ Custom Application Menu for SSWA


10) Create new responsibility.

One for Core Applications and One for Self Service (SSWA)

Navigate to Security-->Responsibility-->Define

Responsibility Name = MZ Custom

Application = MZ Custom

Responsibility Key = MZCUSTOM

Description = MZ Custom Responsibility

Available From = Oracle Applications

Data Group Name = mzGroup Data Group

Application = MZ Custom

Menu = MZ Custom Application

Request Group Name = MZ Request Group
Responsibility Name = MZ Custom SSWA

Application = MZ Custom



Responsibility Key = MZCUSTOMSSWA

Description = MZ Custom Responsibility SSWA

Available From = Oracle Self Service Web Applications

Data Group Name = mzGroup

Data Group Application = MZ Custom

Menu = MZ Custom Application SSWA

Request Group Name = MZ Request Group


11) Add responsibility to user

Navigate to Security-->User-->Define. Add the MZ Custom responsibility to users as required.


12) Other considerations

You are now ready to create your database Objects, custom Reports, Forms, Packages, etc


Create the source code files in the MZ_TOP directory appropriate for the type of object. For example, forms would be located in $MZ_TOP/forms/US and package source code in $MZ_TOP/admin/sql.


Database objects, such as tables, indexes and sequences, should be created in the MZ schema. Then you need to:

a) Grant all privileges on each custom data object to the APPS schema.

For example : logged in as MZ user

SQL> grant all privileges on myTable to apps;


b) Create a synonym in APPS for each custom data object. For example, logged in as the APPS user:

create synonym myTable for mz.myTable;


Other database objects, such as views and packages should be created directly in the APPS schema.


RELATED DOCUMENTS

Oracle Applications Release 11i Developers Guide.

Oracle Applications Release 11i System Administrators Guide.


Additional Search Words

11i custom customisation core apps


Step By Step Guide to Creating a Custom Application in Applications 11i
Doc ID:
Note:216589.1

While converting a report from text to PDF, I got this error:

Current system time is 09-JUL-2007 14:28:38
+---------------------------------------------------------------------------+

+-----------------------------
Starting concurrent program execution...
+-----------------------------
Arguments
------------
P_FROM_TRX_DATE='01-JUL-2007'
P_TO_TRX_DATE='15-JUL-2007'
P_FROM_CURRENCY_NO='KWD'
P_TO_CURRENCY_NO='KWD'
------------
Current NLS_LANG and NLS_NUMERIC_CHARACTERS Environment Variables are :
American_America.AR8ISO8859P6
'.,'
X connection to deverp:0.0 broken (explicit kill or server shutdown).
Report Builder: Release 6.0.8.25.0 - Production on Mon Jul 9 14:28:39 2007
(c) Copyright 1999 Oracle Corporation. All rights reserved.
Enter Username:
+---------------------------------------------------------------------------+
Start of log messages from FND_FILE
+---------------------------------------------------------------------------+
+---------------------------------------------------------------------------+
End of log messages from FND_FILE
+---------------------------------------------------------------------------+
Program exited with status 1
Concurrent Manager encountered an error while running Oracle*Report for your concurrent request 299656.
Review your concurrent request log and/or report output file for more detailed information.

+---------------------------------------------------------------------------+
Executing request completion options...

Finished executing request completion options.
+---------------------------------------------------------------------------+
Concurrent request completed
Current system time is 09-JUL-2007 15:15:12
+---------------------------------------------------------------------------+
.
UPDATE

1. Does this issue happen for all requests? (Try the Active Users report as an example.)
2. Was it working before? If yes, please describe the recent changes.
3. Provide the output of the following queries in an Excel sheet in a readable format.
Replace <> with the request id that you submitted for the report.
(a)
SELECT NUMBER_OF_COPIES,
NLS_LANGUAGE,
NLS_TERRITORY,
PRINTER,
PRINT_STYLE,
COMPLETION_TEXT,
OUTPUT_FILE_TYPE,
NLS_CODESET,
OUTFILE_NODE_NAME,
OUTFILE_NAME
FROM FND_CONCURRENT_REQUESTS
WHERE REQUEST_ID = <>;
(b)
SELECT PRINTER_STYLE_NAME, SRW_DRIVER, WIDTH, LENGTH, ORIENTATION
FROM FND_PRINTER_STYLES
WHERE PRINTER_STYLE_NAME =
(SELECT PRINT_STYLE
FROM FND_CONCURRENT_REQUESTS
where request_id = <>);
(c)
SELECT PRINTER_DRIVER_NAME,
USER_PRINTER_DRIVER_NAME,
PRINTER_DRIVER_METHOD_CODE,
SPOOL_FLAG,
SRW_DRIVER,
COMMAND_NAME,
ARGUMENTS,
INITIALIZATION,
RESET
FROM FND_PRINTER_DRIVERS
WHERE PRINTER_DRIVER_NAME =
(SELECT PRINTER_DRIVER
FROM FND_PRINTER_INFORMATION
WHERE PRINTER_STYLE =
(SELECT PRINT_STYLE
FROM FND_CONCURRENT_REQUESTS
where request_id = <>)
AND PRINTER_TYPE =
(SELECT PRINTER_TYPE
FROM FND_PRINTERS
WHERE PRINTER_NAME =
(SELECT PRINTER
FROM FND_CONCURRENT_REQUESTS
where request_id = <>)));
(d)
SELECT *
FROM FND_ENV_CONTEXT
WHERE CONCURRENT_PROCESS_ID =
(SELECT CONCURRENT_PROCESS_ID
FROM FND_CONCURRENT_PROCESSES A, FND_CONCURRENT_REQUESTS B
WHERE B.CONTROLLING_MANAGER = A.CONCURRENT_PROCESS_ID
AND B.REQUEST_ID = <>)
AND (VARIABLE_NAME IN
('XENVIRONMENT', 'IX_PRINTING', 'ORACLE_HOME', 'LD_LIBRARY_PATH',
'NLS_LANG', 'REPORTS60_NO_DUMMY_PRINTER', 'TK6_PRINT_STATUS',
'TK6_PRINTER', 'TK_PRINTER', 'PRINTER', 'DISPLAY', 'TK_LOCALE',
'PASTA', 'APPLLCSP', 'ORACLE_PRINTER', 'TK_PRINT_STATUS') OR
(VARIABLE_NAME LIKE '%REPORT%' OR VARIABLE_NAME LIKE 'IX_%'));
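As a worked example, for the failing request 299656 shown in the log above, query (a) would be run as follows (the other queries are substituted the same way):

SELECT NUMBER_OF_COPIES, NLS_LANGUAGE, NLS_TERRITORY, PRINTER, PRINT_STYLE,
       COMPLETION_TEXT, OUTPUT_FILE_TYPE, NLS_CODESET, OUTFILE_NODE_NAME, OUTFILE_NAME
  FROM FND_CONCURRENT_REQUESTS
 WHERE REQUEST_ID = 299656;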


ISSUE CLARIFICATION
====================
On 11.5.10.2 in Production:
When attempting to submit any PDF report, the following error occurs.
ERROR
-----------------------
X connection to deverp:0.0 broken (explicit kill or server shutdown).
STEPS
-----------------------
The issue can be reproduced at will with the following steps:
1. For any concurrent program change the output format to PDF
2. Submit this concurrent request
3. Face the above error
BUSINESS IMPACT
-----------------------
The issue has the following business impact:
Due to this issue, users cannot submit any PDF concurrent requests.

.
ISSUE VERIFICATION
===================
Verified the issue in the log file as noted below:
LOG FILE
-----------------------
Filename =Logl299966_req.htm
See the following error:
X connection to deverp:0.0 broken (explicit kill or server shutdown).

DATA COLLECTED
===============
Query Results
--------------
NUMBER_OF_COPIES 0
NLS_LANGUAGE AMERICAN AMERICA
NLS_TERRITORY AMERICA
PRINTER noprint
PRINT_STYLE LANDSCAPE
COMPLETION_TEXT "Concurrent Manager encountered an error
OUTPUT_FILE_TYPE PDF
OUTFILE_NAME /d02/appsdev/devcomn/admin/out/DEV_deverp/o299966.out
PRINTER_DRIVER_NAME LANDSCAPEHPLJ4SI
USER_PRINTER_DRIVER_NAME LANDSCAPE for Printer HPLJ4SI
PRINTER_DRIVER_METHOD_CODE
SPOOL_FLAG
SRW_DRIVER HPL
COMMAND_NAME lpr -P$PROFILES$.PRINTER -#$PROFILES$.CONC_COPIES -T"$PROFILES$.TITLE" $PROFILES$.FILEN
AME
ARGUMENTS /eE/e&l1o2a5.45C/e(s0p16.66H/e&k6.75H
INITIALIZATION /eE
DISPLAY deverp:0.0
IX_PRINTING /fnd/11.5.0/resource/ixlib.cfg
IX_RENDERING /fnd/11.5.0/resource/pasta.cfg
NLS_LANG American_America.AR8ISO8859P6
XENVIRONMENT /guicommon6/tk60/admin/Tk2Motif_UTF8.rgb
XML_REPORTS_XENVIRONMENT /guicommon6/tk60/admin/Tk2Motif_UTF8.rgb
.
RESEARCH
=========
Note.252715.1 X Connection To 1.0 Broken - Explicit Kill Or Server Shutdown
Note.230810.1 X Display Server Connection Broken When Printing Reports

Please perform the following:
ACTION PLAN
============
1. Were PDF reports working before? If yes, please describe the recent changes.
2. - Login to the machine where Reports Server is installed.
- From the UNIX command prompt, set the DISPLAY to a valid X Windows session:
setenv DISPLAY <hostname>:0.0
(replace <hostname> with the host name or IP address of the host running the X server)
- Ensure that DISPLAY was set correctly:
echo $DISPLAY
- Type command "xclock"
Did the clock come up?
3. Try the command (xhost +) on the reports server.
4. Issue the following command on that server
xterm -display <hostname>:0.0
or
xterm -display :0.0
Note: the xterm should open a window on the display
:0.0
- verify that a window is open on the display
Use the applmgr user, not the root user, for this step.
5. Issue the following command on that server
xdpyinfo -display <hostname>:0.0
or
xdpyinfo -display :0.0
Note: the xdpyinfo should show some information about the display
6. In the Application, go to Help > Diagnostics > Examine.
Click the LOV for Block and choose $ENVIRONMENT$.
Then go to the Field section and type in DISPLAY.
If you get "There is no field named DISPLAY under $ENVIRONMENT$ block.",
just type it anyway and then click on Value.
This will show the Forms server DISPLAY setting.
Please update the SR with this value and confirm that it is the correct display.
7. Run the request "Prints environment variable values" with the parameter DISPLAY. Please update the SR with this value and confirm that it is the correct display.
8. Does a report in Postscript format run successfully from the OS command line?
a) Login as the applmgr on the host where the concurrent processing server resides
and source the main environment file.
b) Set the DISPLAY environment variable, if not already set by the main environment file.
echo $DISPLAY
DISPLAY=:0.0 ; export DISPLAY
c) Run the "Active Users" report in Postscript format with the ar60runb executable
and the following options.
Provide the apps password and populate the parameters with the full path of $FND_TOP and $APPLTMP.
ar60runb userid=apps/apps report=/<$FND_TOP>/reports/US/FNDSCURS.rdf batch=yes destype=file mode=bitmap desname=/<$APPLTMP>/ActUsr.ps desformat=/<8.0.6 $ORACLE_HOME>/reports60/admin/printer/psl132.prt errfile=/<$APPLTMP>/ActUsr.log

ACTION PLAN
============
1. In the file Tk2Motif_UTF8.rgb
change the following line
Tk2Motif*fontMapCs: iso8859-1=UTF8
to
Tk2Motif*fontMapCs: iso8859-1=AR8ISO8859P6
2. In the file Tk2Motif.rgb
add the following line
Tk2Motif*fontMapCs: iso8859-1=AR8ISO8859P6
3. Rebounce the concurrent managers, retest the issue, and report the results (see the sketch below for typical commands).
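To rebounce the concurrent managers from the command line on 11i, something like the following is typical; this is a sketch, and the context-specific script directory and apps password are placeholders for your environment.

cd $COMMON_TOP/admin/scripts/<CONTEXT_NAME>
./adcmctl.sh stop apps/<apps_password>
./adcmctl.sh start apps/<apps_password>
./adcmctl.sh status apps/<apps_password>    (verify the managers are back up)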


Helpful search results follow from AutoDiagnose:


1. Note 179801.1 Printing and Previewing Arabic Reports using PASTA in Oracle Applications 11i
2. Note 139654.1 Oracle Applications Release 11i with UTF8 FAQ
3. Note 179801.1 Printing and Previewing Arabic Reports using PASTA in Oracle Applications 11i
4. Note 159225.1 PASTA Required For Printing With UTF8 For Any Release of 11. 5 (Pasta ver 2.0.1)
5. Note 115233.1 How To View Web Report containing Arabic Data
6. Note 115232.1 Arabic Decimal Symbol Appears Incorrectly in Web HTML Reports
7. Note 199429.1 Arabic Appears As Question Marks In Reports
The correct font for Arabic is:
Font.Default.Plain=//ADUO.ttf
Font.Default.Bold=//ADUOB.ttf

FNDLOAD

Last week I spent a considerable amount of time working with queries involving the FNDLOAD utility.

The FNDLOAD utility, also known as the Generic Loader in Oracle Applications, is a concurrent program that can move data across application instances using database and text file representations.

FNDLOAD downloads data from the database using an Oracle-predefined configuration file (.lct) and stores it in a converted data file (.ldt). Similarly, when using FNDLOAD to upload data, the loader uploads the data from the data file created by a previous download, based on the configuration file.

FNDLOAD performs the download and upload based on the instructions given in the configuration files (.lct); these are predefined by Oracle and can be located at $FND_TOP/patch/115/import.
You could also create custom configuration files and place them there.

The data files created by the download process and used for the upload are in the form of text files and hence portable. It is not recommended to modify the data files manually.

Oracle currently supports the migration of the following types of data using FNDLOAD:

Printers / Print queues / Executables.
Roles / Responsibilities / Forms.
Menus / Users / Request Sets.
Request Groups / Request Queues.
Work shifts / Programs / Libraries.
Attachments / Help Files.
Mime Types.
Security Information.


The FNDLOAD utility uses the OWNER and LAST_UPDATE_DATE attributes to ensure data preservation. Data with OWNER=CUSTOM is always given preference over OWNER=SEED; that is, CUSTOM data will never be overwritten by SEED data. Where the OWNERs are the same, LAST_UPDATE_DATE is given preference; that is, the row with the higher LAST_UPDATE_DATE is inserted.

The FNDLOAD executable can be located at $FND_TOP/bin.

syntax

FNDLOAD [username/password] 0 Y [mode] [configuration file] [target data file] [object type key] [object Identifier]
Where:

0 Y represent your concurrent program flags.
[username/password] is your apps username and password.
[mode] is DOWNLOAD or UPLOAD.
[configuration file] is the .lct file, like afcpprog.lct.
[target data file] is the .ldt file created during the download (or used for the upload), like someprg.ldt.
[object type key] represents your object type, like CONCURRENT_PROGRAM.
[object Identifier] is your object name, like CONCURRENT_PROGRAM_NAME='XXACMEPRT'.
Here are a few FNDLOAD commands I use frequently. Some of the examples below are originally from Anil Passi's site, and a full list can be found at http://oracle.anilpassi.com/oracle-fndload-script-examples.html.

Concurrent Program

$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpprog.lct XXSAM_PRG.ldt PROGRAM APPLICATION_SHORT_NAME='XSPRT' CONCURRENT_PROGRAM_NAME='XX_SAM_CUSTOM_PRT'


$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y UPLOAD $FND_TOP/patch/115/import/afcpprog.lct XXSAM_PRG.ldt

Request Sets

This happens in two stages: first the request set definition, and then the underlying links.

$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcprset.lct XX_SAM_REQ_SET_DEF.ldt REQ_SET REQUEST_SET_NAME="XX_SAM_REQ_SET"


$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcprset.lct XX_SAM_REQ_SET_LK.ldt REQ_SET_LINKS REQUEST_SET_NAME="XX_SAM_REQ_SET"


Now upload the definition first and then the links.

$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y UPLOAD $FND_TOP/patch/115/import/afcprset.lct XX_SAM_REQ_SET_DEF.ldt

$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y UPLOAD $FND_TOP/patch/115/import/afcprset.lct XX_SAM_REQ_SET_LK.ldt


Value Set

$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y DOWNLOAD $FND_TOP/patch/115/import/afffload.lct XXSAMVALUESET.ldt VALUE_SET FLEX_VALUE_SET_NAME='XX_SAM_VALUE_SET'


$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y UPLOAD $FND_TOP/patch/115/import/afffload.lct XXSAMVALUESET.ldt


Request Groups


$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y DOWNLOAD $FND_TOP/patch/115/import/afcpreqg.lct XX_SAM_GROUP.ldt REQUEST_GROUP REQUEST_GROUP_NAME="XX_SAM_GROUP" APPLICATION_SHORT_NAME="XXSAM"

$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y UPLOAD $FND_TOP/patch/115/import/afcpreqg.lct XX_SAM_GROUP.ldt

Responsibility

$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscursp.lct XXSAMCUSTRESP.ldt FND_RESPONSIBILITY RESP_KEY="XX_SAM_CUSTRESP"


$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y UPLOAD $FND_TOP/patch/115/import/afscursp.lct XXSAMCUSTRESP.ldt


Menus


$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y DOWNLOAD $FND_TOP/patch/115/import/afsload.lct XXSAM_MENU.ldt MENU MENU_NAME="XX_SAM_MENU"


$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y UPLOAD $FND_TOP/patch/115/import/afsload.lct XXSAM_MENU.ldt


Messages


This will download all messages under the custom application XXFL.


$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y DOWNLOAD $FND_TOP/patch/115/import/afmdmsg.lct XX_FL_MSG.ldt FND_NEW_MESSAGES APPLICATION_SHORT_NAME='XXFL'


$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y UPLOAD $FND_TOP/patch/115/import/afmdmsg.lct XX_FL_MSG.ldt


You may also download and upload individual messages.


Lookup Values

$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y DOWNLOAD $FND_TOP/patch/115/import/aflvmlu.lct XX_SAM_PO_VAL.ldt FND_LOOKUP_TYPE APPLICATION_SHORT_NAME='XXPO' LOOKUP_TYPE="XX_SAM_PO_VAL"


$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y UPLOAD $FND_TOP/patch/115/import/aflvmlu.lct XX_SAM_PO_VAL.ldt


Profile Options


$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y DOWNLOAD $FND_TOP/patch/115/import/afscprof.lct SAMPRF.ldt PROFILE PROFILE_NAME="profile option" APPLICATION_SHORT_NAME="FND"


$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y UPLOAD $FND_TOP/patch/115/import/afscprof.lct SAMPRF.ldt
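Related to the OWNER / LAST_UPDATE_DATE preservation described earlier: if an upload must overwrite existing rows regardless of ownership, FNDLOAD accepts a CUSTOM_MODE flag. A hedged example, reusing the concurrent program files from above; verify the flag's behaviour on your release before relying on it:

$FND_TOP/bin/FNDLOAD apps/apps@DB_INSTANCE 0 Y UPLOAD $FND_TOP/patch/115/import/afcpprog.lct XXSAM_PRG.ldt CUSTOM_MODE=FORCE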


Creating a RMAN Clone Database and Apps 11i on a New Host

Primary Database SID: PROD
Duplicate Database SID: TEST
RMAN Catalog SID: RMAN


Steps

1. Backup the primary database.

2. Determine how much disk space will be required.

3. Ensuring you have enough space on your target server.

4. Making the backup available for the duplicate process.

5. Creating the init.ora & administration directories for the duplicate database.

6. Prepare RMAN duplicate script.

7. Execute the RMAN script.

1. Backup of the primary database.

Host A (Target)

Prior to creating a duplicate database you must have a backup of the target database. In this example we assume the backup media is disk. Find appropriate disk space on host A and take a full backup including archive logs and the current controlfile. If you are planning to duplicate only a subset of the database, refer to (Figure 1b), which illustrates the RMAN command for backing up certain tablespaces.
[oracle@AIX] export ORACLE_SID=PROD

[oracle@AIX] rman target=/ catalog=rman/rman@RMAN

RMAN> run {
allocate channel d1 type disk;
backup format '/d01/backup/PROD/datafile_t%t_s%s_p%p' database;
sql 'alter system archive log current';
backup format '/d01/backup/PROD/alog_t%t_s%s_p%p' archivelog all;
release channel d1;
}
The above command performs a full database backup including archive logs and the current controlfile.

2. Determine how much disk space will be required.

Host A(Target) -

After deciding what you will be duplicating, calculate the total space this will require on the new host. The full-database query below calculates the entire space required, whereas a tablespace-level query allows you to enter the tablespace names in order to calculate only the space required for those tablespaces (see the sketch after the query output below).

SQL> select DF.TOTAL/1048576 "DataFile Size Mb",
LOG.TOTAL/1048576 "Redo Log Size Mb",
(DF.TOTAL + LOG.TOTAL)/1048576 "Total Size Mb" from dual,
(select sum(a.bytes) TOTAL from dba_data_files a) DF,
(select sum(b.bytes) TOTAL from v$log b) LOG;

Calculate space for list of tablespaces within primary database
DataFile Size Mb Redo Log Size Mb Total Size Mb
---------------- ---------------- -------------
31860 20 31880
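The tablespace-level calculation referred to above is not reproduced here; a minimal sketch, assuming you list the tablespace names you plan to duplicate (the names below are only examples):

SQL> select sum(bytes)/1048576 "DataFile Size Mb"
     from dba_data_files
     where tablespace_name in ('SYSTEM', 'UNDOTBS', 'USERS');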

3. Ensuring you have enough space on your target server.

Host B (TEST)

Prior to starting the duplicate process you must ensure you have enough disk space within the file system to hold the database:
appltest@test : /d05> df -sg
Filesystem GB blocks Free* %Used Iused %Iused Mounted on
/dev/hd4 0.25 0.18 27% 5262 11% /
/dev/hd2 2.75 0.35 88% 38731 32% /usr
/dev/hd9var 1.25 0.08 94% 1404 8% /var
/dev/hd3 1.00 0.32 68% 2860 4% /tmp
/dev/fwdump 0.50 0.50 1% 4 1% /var/adm/ras/platform
/dev/hd1 6.25 5.94 5% 4411 1% /home
/proc - - - - - /proc
/dev/hd10opt 0.75 0.25 67% 14037 20% /opt
/dev/locallv 0.25 0.25 1% 71 1% /usr/local
/dev/disk02lv 260.00 33.54 88% 2131066 18% /d05

4. Making the backup available for the duplicate process.

If your backup resides on disk you will need to copy the backup from host A to host B. Ensure you place it in the same directory as where it was created. In the example below the backup pieces reside in '/backups/PROD'; these files need to be copied into the same directory on host B.

Also ensure that all archive log backups have been moved, along with any archive logs in the archive log directory that may be required for the recovery. The archive logs required will depend on the point in time of the recovery.
RMAN> list backup;
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
22 Full 529M DISK 00:00:51 2007/08/25 12:19:05
BP Key: 22 Status: AVAILABLE Compressed: NO Tag: TAG20070825T111203
Piece Name: /backups/PROD/datafile_t598321425_s23_p1
List of Datafiles in backup set 22
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- ------------------- ----
1 Full 1393845 2007/08/25 11:12:03 /oradata/PROD/system01.dbf
2 Full 1393845 2007/08/25 11:12:03 /oradata/PROD/undotbs01.dbf
3 Full 1393845 2007/08/25 11:12:03 /oradata/PROD/sysTEST01.dbf
4 Full 1393845 2007/08/25 11:12:03 /oradata/PROD/users01.dbf
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
24 48M DISK 00:00:06 2007/08/25 12:58:55
BP Key: 24 Status: AVAILABLE Compressed: NO Tag: TAG20070825T111301
Piece Name: /backups/PROD/alog_t598321429_s25_p1
List of Archived Logs in backup set 24
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 78 1344750 2007/08/24 04:55:34 1353208 2007/08/24 10:00:19
1 79 1353208 2007/08/24 10:00:19 1353213 2007/08/24 10:00:20
1 80 1353213 2007/08/24 10:00:20 1372464 2007/08/24 22:00:11
1 81 1372464 2007/08/24 22:00:11 1373291 2007/08/24 22:00:59
1 82 1373291 2007/08/24 22:00:59 1381066 2007/08/25 03:00:08
1 83 1381066 2007/08/25 03:00:08 1390685 2007/08/25 09:03:00
1 84 1390685 2007/08/25 09:03:00 1393870 2007/08/25 11:13:00
1 85 1393870 2007/08/25 11:13:00 1393929 2007/08/25 11:13:00
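For example, copying the backup pieces and archive log backups listed above from host A to the same path on host B could look like this (a sketch; the target host name is a placeholder):

[oracle@AIX] scp /backups/PROD/* oracle@<hostB>:/backups/PROD/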

5. Creating the init.ora & administration directories for the duplicate database.
# +----------------------------------------+
# FILE : initTEST.ora
# DATABASE NAME : TEST
# +----------------------------------------+
# Set the below to location of the clone Duplicate database / name of
# clone database.
# Set the below to the location of the duplicate clone control file.
# Set the below for the from and to location for all data files / redo
# logs to be cloned.
db_file_name_convert =("/d05/oradata/PROD/", "/d04/oradata/TEST/")
log_file_name_convert =("/d05/oradata/PROD/", "/d04/oradata/TEST/")
#Set the below to the same as the production target
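The parameter values themselves are not shown above; a minimal sketch of what typically accompanies those comments follows (the control file location, block size and compatible setting are assumptions to adapt to your environment):

db_name = TEST
control_files = ('/d04/oradata/TEST/control01.ctl')
db_file_name_convert = ("/d05/oradata/PROD/", "/d04/oradata/TEST/")
log_file_name_convert = ("/d05/oradata/PROD/", "/d04/oradata/TEST/")
db_block_size = 8192
compatible = 9.2.0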
Following the creation of initTEST.ora, start the auxiliary instance in NOMOUNT mode.
[oracle@AIX]export ORACLE_SID=TEST

[oracle@AIX] sqlplus '/as sysdba'

SQLPLUS> startup nomount;


6. Prepare RMAN duplicate script.

In a working directory on host B, create an RMAN script file clone.rcv. The example below shows the command for a complete duplicate; (figure 7b) skips the tablespaces which are not required in the duplicate and provides the syntax required for a point-in-time duplicate (see the sketch after the script).

run {
allocate auxiliary channel C1 device type disk;
duplicate target database to TEST;
}
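The subset and point-in-time variants referenced above are not reproduced here; a hedged sketch of what they typically look like (the tablespace name is illustrative, and the timestamp comes from the archive log listing above):

run {
allocate auxiliary channel C1 device type disk;
set until time "to_date('2007/08/25 11:13:00','YYYY/MM/DD HH24:MI:SS')";
duplicate target database to TEST
skip tablespace USERS;
}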

7. Execute the RMAN script.

Start RMAN; connect to the production target, the catalog instance, and the auxiliary (TEST) clone. Run the RMAN duplicate script as shown below. Before doing this, ensure that the ORACLE_SID environment variable is set to the duplicate clone database.

[oracle@AIX] export ORACLE_SID=TEST
[oracle@AIX] rman target sys/sys@PROD catalog rman/rman@rman auxiliary /
RMAN> @clone.rcv


or

run
{
allocate auxiliary channel C1 device type disk;
duplicate target database to TEST;
}

This is for an Apps 11i database.
After the database creation, we need to create the XML context file.

Step 1: perl admkappsutil.pl

$ cd $AD_TOP/bin
$ pwd
/d05/appsdev2/oracle/prodappl/ad/11.5.0/bin
$ perl admkappsutil.pl
Starting the generation of appsutil.zip
Log file located at /d05/appsdev2/oracle/prodappl/admin/log/MakeAppsUtil_08010323.log
output located at /d05/appsdev2/oracle/prodappl/admin/out/appsutil.zip

root@TEST2 # cd /d05/appsdev2/oracle/prodappl/admin/out
$ ls -lst

root@TEST2 # cp appsutil.zip /d05/oracle/9.2.3
bash-2.03$ unzip -o appsutil.zip

Archive: appsutil.zip
inflating: appsutil/java/oracle/apps/ad/autoconfig/oam/CtxSynchronizer.class
inflating: appsutil/java/oracle/apps/ad/autoconfig/oam/CtxSynchronizerException.class
...

bash-2.03$ perl adbldxml.pl tier=db appsuser=apps appspasswd=apps

Also, the database table GLOBAL_NAME was holding the wrong value for the domain.

Enter domain name:testdm

Enter the value for Display Variable: 169.16.3.1:0.0

Context File /d05/oracle/9.2.3/appsutil/DEV_TEST2.xml
already exists.

Overwriting Context File may cause loss of existing settings, hence
backing it up as: /d05/oracle/9.2.3/appsutil/DEV_TEST2.xml.bak

The context file has been created at:
/d05/oracle/9.2.3/appsutil/DEV_TEST2.xml

The log file for this adbldxml session is located at:
/d05/oracle/9.2.3/appsutil/log/DEV_TEST2/08010337.log

bash-2.03$ pwd
/d05/oracle/9.2.3/appsutil/bin

bash-2.03$ perl adconfig.pl contextfile=/d05/oracle/9.2.3/appsutil/DEV_hostname.xml appspass=apps

AutoConfig is configuring the Database environment...
Context Value Management will now update the Context file

Updating Context file...COMPLETED

Attempting upload of Context file and templates to database...COMPLETED

Updating rdbms version in Context file to db920
Updating rdbms type in Context file to 64 bits
Configuring templates from ORACLE_HOME ...

AutoConfig completed successfully.
The log file for this session is located at: /d05/oracle/9.2.3/appsutil/log/DEV_hostname2/08011549/adconfig.log

Apps Tier Clone

bash-2.03$ cd $ORACLE_HOME/appsutil/scripts/
bash-2.03$ perl adpreclone.pl dbTier
Running Rapid Clone with command...

Configure the target system application tier server nodes
Log on to the target system as the APPLMGR user and type the following commands:
cd <COMMON_TOP>/clone/bin
perl adcfgclone.pl appsTier
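For the application tier itself, the usual Rapid Clone sequence is roughly as follows; this is a sketch, and the script directories shown are the standard 11i defaults, which may differ in your install.

On the source application tier, as applmgr:
cd $COMMON_TOP/admin/scripts/<CONTEXT_NAME>
perl adpreclone.pl appsTier

Copy the APPL_TOP, COMMON_TOP and the tech-stack ORACLE_HOMEs to the target host, then on the target:
cd <COMMON_TOP>/clone/bin
perl adcfgclone.pl appsTier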

Find Oracle Apps DBA scripts

An Oracle Apps DBA can find scripts to help with the concurrent manager, the job-scheduling component of Oracle Apps.
The following SQL scripts, located under $FND_TOP/sql, are useful when diagnosing concurrent manager problems (an example of running one follows the list):

1. afimchk.sql Tells the status of the ICM and PMON method

2. afcmstat.sql Lists active manager processes

3. afrqrun.sql Lists all the running, waiting and terminating requests

4. afrqwait.sql Lists requests that are constrained and waiting for the ICM to release them

5. afrqscm.sql Prints log file name of managers that can run a given request. It can be used to check for possible errors when a request stays in pending status. It requires a request id value.

6. afcmcreq.sql Prints the log file name of the manager that processed the request

7. afrqstat.sql Summary of completed concurrent requests grouped by completion status and execution type. It requires number of days prior to today on which to report parameter.

8. afimlock.sql Lists locks that the ICM is waiting to get

9. afcmrrq.sql Lists managers that currently are running a request
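For example, to run one of these scripts from the command line (a sketch; substitute your apps password):

cd $FND_TOP/sql
sqlplus apps/<apps_password> @afcmstat.sql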

CONFIGURING NFS

The Network File System, also known as NFS, allows a system to share directories and files with others over a network. By using NFS, users and programs can access files on remote systems almost as if they were local files. (The configuration steps below use Solaris commands.)

Some of the most notable benefits that NFS can provide are:

Local workstations use less disk space because commonly used data can be stored on a single machine and still remain accessible to others over the network.

There is no need for users to have separate home directories on every network machine. Home directories could be set up on the NFS server and made available throughout the network.

Storage devices such as floppy disks, CDROM drives, and Zip® drives can be used by other machines on the network. This may reduce the number of removable media drives throughout the network.

SERVER SIDE

1. Bring up the network card.
2. Edit /etc/hosts (insert the server and client IP addresses and hostnames).
3. Edit /etc/hostname.hme0 (insert the hostname to be used for the hme0 interface).

STEP 1:

# vi /etc/dfs/dfstab
..
..
..
share -o ro /usr/share/man

:wq!

STEP 2:

[Note: Stop and start the NFS server daemons]

#/etc/init.d/nfs.server stop
#/etc/init.d/nfs.server start

STEP 3:
Verify the share dir:

# share
- /usr/share/man ro " "

# dfshares [it will display the current share name]

RESOURCE SERVER ACCESS TRANSPORT
server:/usr/share/man server - -

CLIENT SIDE

# mv /usr/share/man /usr/share/man.bak
Create a new man directory (/usr/share/man) to use as a mount point.

# cd /usr/share
# mkdir man
# mount server:/usr/share/man /usr/share/man
# man ls

[The man pages now work. Note that on the client side we moved the local man pages to man.bak, yet man still works; this is because the pages are now served over NFS from the server.]

1. To verify the list of mounts provided by the server to the client:

Ans: # dfmounts server

RESOURCE SERVER PATHNAME CLIENTS
- server /usr/share/man client

2. To UNMOUNT

Ans: # umount /usr/share/man
# dfmounts server

output: No output
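To make the client mount persistent across reboots on Solaris, an /etc/vfstab entry along these lines is typical (a sketch using the server and share from this example):

# device to mount       device to fsck  mount point     FS type  fsck pass  mount at boot  options
server:/usr/share/man   -               /usr/share/man  nfs      -          yes            ro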


NOTE: TO CHANGE THE HOSTNAME OF THE SERVER, EDIT THE FILES BELOW

/etc/inet/hosts
/etc/nodename
/etc/hostname.hme0
/etc/net/ticlts/hosts
/etc/net/ticots/hosts
/etc/net/ticotsord/hosts

i.e. to rename the host, update the hostname in each of these files.