Usage |
Command line |
Called without any parameter, the scripts display a short help page concerning
their command line parameters. The minimal requirement for all scripts is the
ORACLE_SID - some scripts may need even more parameters.
More command line parameters available for all scripts are listed in the table
below. Parameters that do not apply to all scripts are described for the
affected scripts in separate sections.
|
Note that the Generate_createdb.sh script does not use the
globalconf configuration file, and the parameters/options
described in this general block do not apply to that script (except for the
ORACLE_SID parameter, which should not need any explanation).
|
Parameter | Description |
Common parameters |
-c |
Alternative configuration file to use. You may create
different configurations for special cases (e.g. for
different database instances), and use them by specifying the corresponding
config file on the command line. |
-u |
User name for the SQL CONNECT statement |
-p |
Password for the SQL CONNECT statement |
-s |
Connection string for the SQL CONNECT
statement (if different from the ORACLE_SID ). This
is used for the SQL CONNECT string only, while the
ORACLE_SID is also used for the log file name. |
--force |
Force reorganization regardless of chain percentage
(tabreorg.sh ), or force the move/rebuild without the
ONLINE option if it fails with that option (tabmove.sh ,
tabreorg.sh / idxrebuild_all.sh ) |
analobj.sh and tabreorg.sh parameters |
-l |
Override the LOGALL config variable. Specify
1 (= on) or 0 (= off) as value. |
--noadjust |
Don't adjust the initial/next extent size and pctfree/pctused
(idxrebuild_all.sh and tabreorg.sh only). |
--noanalyze |
Only print the chain stats without running an analyze first
(analobj.sh only). Useful if you regularly analyze your
schemata and thus can trust the existing stats. |
tabreorg.sh parameters |
-o |
Only work on objects of this owner |
|
An example command line could look like this:
./tabreorg.sh oradb -c my.config -l 1 -o peter
|
|
Generate_createdb.sh |
The Generate_createdb.sh script can be used to
generate a create database script from the
information stored within a running instance - what could be called
"reverse engineering". If you saved these scripts after setting up your
instance, you may not need to do this -- but sometimes these scripts are
missing, and you want to be prepared for the "worst case".
As stated in the header of the script itself, it is in no way complete
or certified -- as it is not the recommended way to get the create scripts.
But sometimes there's no other option left, if that important information is
lost -- so Generate_createdb.sh at least gives you something to
start with. After the scripts have been created, you should check them
thoroughly for any mistakes, errors and missing information. The script does
not take care of all the latest features you may have used; however, it
should create valid scripts for Oracle versions 8i and 9i.
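Following the same calling convention as the other scripts (the instance name
being just an example), a run could look like this:
./Generate_createdb.sh oradb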
|
analobj.sh |
As already mentioned in the introduction, analobj.sh is intended
to analyze objects in a given schema (hence its name). This is
necessary to provide the Cost Based Optimizer (CBO) with the data it needs for
indices and tables - but for tables it is also good to know about their
fragmentation, i.e. the count of chained or migrated rows. With too many
migrated rows, Oracle may end up chasing blocks when fetching rows: while the
index tells the requesting process that the wanted table row is stored in
block X, in block X we only find the information "look also in block Y" - and
this just because there was not enough free space available on an update,
since the block was already filled up.
To get rid of this ugly situation, we need to re-organize the table.
tabreorg.sh helps you to do this automatically.
analobj.sh expects at least 3 command line options:
- the
ORACLE_SID
- the schema to analyze
- the object type to analyze
Valid object types are table , index and
all , where "all" means "tables and indices". If you run the analysis
for the CBO, you should analyze "all" objects. If you just want to identify
tables with row migration, you may want to analyze "tables" only. I have no
idea why someone may want to analyze indices only - but for the sake of
completeness I included this possibility as well.
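A complete analysis (tables and indices) of the - purely exemplary - schema
SCOTT in the instance oradb would thus be started with:
./analobj.sh oradb scott all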
|
idxmove.sh |
Indices are a good invention to boost performance for many queries. But the
best boost in this context can be reached by having the indices stored on a
volume (i.e. hard disk) other than the data itself, since this way the disk
IO is spread across devices.
But not all applications (and developers) take enough care about using
different tablespaces for data and indices, respectively. Now Oracle has a
default tablespace clause in the CREATE USER and
ALTER USER statements, and since Oracle 9 a default
temporary tablespace as well - but no such thing as a default index
tablespace , unfortunately. So on a CREATE INDEX command
without explicitly defining a tablespace for it, the index will be created
within the user's default tablespace, together with the tables etc.
This is where idxmove.sh comes in: it looks in the SourceTS
whether there are any indices contained, and if so, moves them into the
TargetTS using the ALTER INDEX..MOVE command. If
possible, the move will be done with the ONLINE option, so no
running session will be affected by this process.
idxmove.sh expects at least 3 parameters:
- the
ORACLE_SID (as usual)
- the
SourceTS to look for the indices to be moved
- the
TargetTS to move the indices to
For more options, see the command line options
above.
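For example, to move all indices found in the USERS tablespace of the instance
oradb into a (here purely exemplary) INDX tablespace:
./idxmove.sh oradb USERS INDX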
|
idxrebuild_inv.sh |
If indices become invalid, you may notice a strong performance loss for
applications that were using them, due to Oracle executing Full Table Scans
(FTS) instead of finding the row id(s) via the index. In some cases, the entire
application may stop working (or at least it may seem so) and become unusable
due to its bad performance. This is an intolerable situation - but thanks to
idxrebuild_inv.sh we can overcome it quickly and easily:
idxrebuild_inv.sh queries the system views for indices that are
in unusable state and then tries to rebuild them using the ALTER
INDEX..REBUILD command. If rebuilding an index fails, this is reported
within the log file by a line starting with an exclamation mark ("!").
idxrebuild_inv.sh requires the ORACLE_SID as first
parameter. As a second, optional parameter you may specify a tablespace to
restrict the check to. More optional command line parameters can be found
in the command line section of this document.
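Two example calls - the first checking all tablespaces of the instance oradb,
the second restricted to the (exemplary) USERS tablespace:
./idxrebuild_inv.sh oradb
./idxrebuild_inv.sh oradb USERS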
|
idxrebuild_all.sh |
Over time, indices can lose performance since they lose balance due to
(frequent) updates/inserts. They may also use more space than needed
because of deletes. Thus, to make them performant again as well as to regain
the wasted space, a rebuild cannot be that bad an idea.
idxrebuild_all.sh checks your database dictionary for the
existing indices with more than 1 extent in use (for the smaller ones we don't
gain much speed, and hardly any space). This list will be ordered by the
space consumed by each index. Then it processes this list from top to bottom.
First it tries to REBUILD ONLINE and, if this fails, without the
ONLINE option (only if called with the --force parameter). Three times
during this process it will COALESCE the tablespace (if you specified
one):
- 50% of the index count or 1/3 of the size have been processed
- 75% of the index count or 2/3 of the size have been processed
- all indices have been processed
This way chances are good that the largest index may be successfully rebuilt,
even if in the beginning it looked like you did not have enough space for this.
If you did not specify the --noadjust parameter,
idxrebuild_all also adjusts the storage parameters, i.e. the initial
and next extent sizes, based on the size of the index. This will reduce the
fragmentation of the storage, and should also result in better performance. The
values are stored in the globalconf configuration file, so please
see there for details.
The syntax of this script (and the logfile format) is the same as for
idxrebuild_inv.sh above.
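An example call, restricting the run to the (again exemplary) USERS tablespace
and allowing the fallback to an offline rebuild:
./idxrebuild_all.sh oradb USERS --force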
|
lazywaste.sh |
Some users show the bad habit of logging into the database one day, and then
leaving the session idle/inactive for many days (if not even weeks or months).
The affected sessions thus waste resources you'd prefer to use otherwise, so
you may want to kill at least some of the top wasters. To find those, this
script can be used. It gives you an overview on
- size and used space in your default temporary tablespace
- (max) allocated and used PGA
- number of lazy sessions (if any) and the resources wasted by them
- top N sessions ordered by PGA usage
- top N sessions ordered by Temp usage
Within the script file itself (at the very top) you can adjust the default
values for how long a session must be inactive to be counted as "lazy", and how
many "top N" statements should be listed. The script accepts the SID of the
database to connect to as its only parameter (specifying "-h" instead shows the
syntax). If called without a parameter, the environment variable $ORACLE_SID
will be evaluated. The database connection is then established using SQL*Plus
and OS verification ("/ as sysdba ").
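Two equivalent example calls (the instance name again being just an example):
./lazywaste.sh oradb
ORACLE_SID=oradb ./lazywaste.sh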
|
tabmove.sh |
As of Oracle 9, each tablespace can have its own block size setting: so
while having one tablespace with 8k block size, we can have another with 16k
block size for larger objects. So if we want to convert a tablespace, this may
be one reason to use tabmove.sh to get all the tables there.
Another possible reason is that tables and indices have been stored in the
same tablespace, and we just want to move the tables out while the indices
should stay in this tablespace.
tabmove.sh expects at least 3 parameters:
- the
ORACLE_SID (as usual)
- the
SourceTS to look for the tables to be moved
- the
TargetTS to move the tables to
For more options, see the command line options
above.
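For example, to move all tables from the USERS tablespace of the instance
oradb into a (purely exemplary) DATA16K tablespace:
./tabmove.sh oradb USERS DATA16K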
|
tabreorg.sh |
If tables are fragmented (see analobj.sh ),
this can become a performance issue. To automatically remove this fragmentation,
I wrote the tabreorg.sh script. In order to get rid of the
fragmentation and help prevent it in the future, it takes the following
steps:
- Checking for tables where the fragmentation exceeds a given threshold
- Adjusting
PCTFREE and PCTUSED as well as the
STORAGE (i.e. initial and next extent sizes) as specified in
the configuration file (if you did not specify the --noadjust
parameter, of course)
- Calling the
ALTER TABLE..MOVE command with the evaluated
parameters
The only required parameter for tabreorg.sh
is the ORACLE_SID - more optional command line parameters can be
found in the command line section of this document.
|
undo_used.sh |
This script requires no command line parameters. It however expects the
environment variable ORACLE_SID to be set and pointing to the
instance you want to investigate. If this is set correctly, it will list
all undo records held by current transactions together with some relevant
session and object data.
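An example session (instance name assumed) could thus look like this:
export ORACLE_SID=oradb
./undo_used.sh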
|
|
Evaluation of results |
The only results you can see immediately come with analobj.sh :
the logfile provides you with a list of tables that exceed the configured
threshold values, and you may now consider whether to reorganize them. As for
the other scripts, just consult your logfile to see if there have been any
problems (marked by a "!" at the beginning of a line). If there is no such
line, and the script didn't crash for some reason, you should be able to tell
the result by the intended action ;)
|
|
rman.sh |
rman.sh is a stand-alone toolkit distributed along
with the DBAHelper archive. It does not depend on other components of this
distribution, and thus also doesn't make use of any settings other than those
made in its own directory. Its intention is to encourage you to use RMAN -
which sometimes frightens people because they think it's difficult to use.
Hereby we prove this is not the case.
The provided script lets you make backups, validate and crosscheck them,
clean up obsolete, expired and even orphaned ones - and more. Its syntax
is quite easy:
./rman.sh [<command> [options]]
Starting with version v0.2.4 of the DBAHelper package, the RMAN framework
has been almost completely reworked. You will notice this immediately after
the first start of the new tool - if you have dialog installed
on your machine: Most of the work has gone into a friendlier UI using
dialog , if available - falling back to "plain shell" mode
otherwise. For interactive use, dialog is highly recommended.
For non-interactive use, such as automated backups/cleanups, I recommend
always adding the -q option at least two times (output is
written to the logfile anyway, and not watched directly during automated
runs), which also forces the script back to "plain shell" mode.
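A cron-suitable call of the daily backup (using the backup_daily
command described in the table below) could thus look like this:
./rman.sh backup_daily -q -q --yestoall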
Starting the script without any parameters will enter interactive mode (and
display a menu of actions to choose from). So far, the following commands
are available:
|
Parameter | Description |
Commands |
backup_daily |
Runs a daily backup of your database, according to the
policies you configured or took for granted. In the latter case this
means making a full backup once, and then cumulative backups every day.
When the retention limit (default: 7 days) is reached, the full
backup gets merged with the following incremental backup, so it always
reflects the state of 7 days ago - and you can recover to any point
in time during the last 7 days. |
block_recover |
Recover corrupt database blocks. You may need this when
Oracle tells you about such, i.e. when you get an ORA-1578
ORACLE data block corrupted . The script will ask you to provide the
details the ORA-1578 message reported. |
cleanup_expired |
Expired files are those which no longer exist on
disk, but are still listed in the catalog or control file. If the
reason is not a temporarily unavailable drive, but the permanent deletion
of no-longer-needed files (tsk, you should use cleanup_obsolete
for that!), you may want to remove this outdated information from the
records. |
cleanup_obsolete |
This will clean up all files exceeding your configured
retention policy. Except for some cases where Oracle bugs interfere...
The script will list the files it is about to delete (outdated ones),
and ask for your confirmation. |
create_standby |
Step-by-step guidance on creating a standby database. This
command assumes you are creating a standby database on another server,
which is identically configured concerning Oracle and its directory
structures. It gives you information on what to do, and automates some
of the necessary steps. However, some preparations are needed from your
side: Please see the article about creating a standby database on
IzzySoft.DE first and follow the preparations up to the
database status check - from there on the script will guide you. |
crosscheck |
Crosschecking files means checking whether the records in the
catalog or control file refer to files that really exist physically. Those no
longer existing will be marked 'expired'. Take care that all your drives
are available when running this. |
move_fra |
If your current recovery area runs out of disk space, you
may want to relocate it to a different disk without losing its content.
Using the move_fra command of rman.sh , this
will be done automatically for you: You will be asked to confirm the
current FRA location, enter a new one, confirm the ORACLE_SID -
and the script will create the new directory (if necessary), alter your
database to use the new location, move all files there, and run RMAN
to update its catalog/controlfile accordingly. |
force_clean |
This does a thorough check for those orphaned and obsolete
files Oracle forgot about. Use this carefully - and read more about it
in the doc/readme.rman file before using this command. |
recover |
Wow, it crashed. What now? Recover, of course. This command
tries to recover the database without restoring a backup. If it fails, continue
with one of the restore commands. |
restore_ctl |
Restore (a) lost controlfile(s) - if possible, from a
multiplexed copy; otherwise from backup. |
restore_full |
Run a complete restore of all the database files. This is
the last resort if nothing else works, and will shut down your database
instance first. Then it restores all files from backup and runs a recover. |
restore_ts |
Restore a tablespace. If only one or a few tablespace(s)
are broken (and the SYSTEM tablespace does not belong to
that list), you can leave the database up and running and only restore
what's needed. It will ask you for some details. |
restore_temp |
The name may be misleading, since the temporary tablespace never was
backed up: but yes, if it got lost, we can recover from that too. |
switchover |
Maybe your Standby and Primary databases should switch their
roles - e.g. due to some maintenance tasks? This command will do it. |
validate |
Nice to know you have a backup. But better to check if it's
not broken, right? So you can use this command to validate your backups. |
Options |
-c <configfile> |
Alternative configuration file to use. You may create
different configurations for special cases (e.g. for
different database instances), and use them by specifying the corresponding
config file on the command line. |
-h , --help , -? |
Display help message and exit |
-l <logfile> |
Use an alternative logfile. To configure the logging
directory, see inside the script itself for now. |
-p <password> |
Password to use to connect to the target database. |
-q |
Be quiet - don't produce so much output. Can be specified
multiple times: -q redirects STDOUT from called programs to
/dev/null , -q -q additionally redirects
STDERR (plus forces the script to fallback to "plain shell" mode
instead of using dialog , which is useful for automated
scripts), and -q -q -q even suppresses most of the "background
information" (except for the questions asked) the script itself produces. |
-r <catalog specification> |
Complete connection description for the recovery catalog
to use (if any). This includes username, password and connection string (if
needed), e.g. user/pass@remote . |
-u <username> |
User to connect to the target database |
--all |
Only affects the backup_daily and
cleanup_obsolete commands: Run the backup/cleanup for all
databases. This requires a configuration file for each database which
should be handled this way: Take the rman.conf file as a
template, and copy (or symlink) it to rman_$ORACLE_SID.conf
in the same directory. rman.sh backup_daily --all will check
for all files named rman_*.conf , set the ORACLE_SID
to what the * expands to, and run the backup. If no
rman_*.conf file is found, it will run as if the --all
option was not given (i.e. run the backup for the currently set
ORACLE_SID only). See the crontab example below the table. |
--dryrun |
Don't run commands that would make changes, just display
what would be done |
--force-configure |
By default, rman.sh runs the rman.conf
configuration the first time it is called - and afterwards touches a reference
file, ~/.rman_configured . On subsequent executions, it compares the
timestamps and only runs rman.conf again if it is newer
than the reference file. With this parameter you can enforce a config run. |
--[no]dialog |
Overrides the USEDIALOG setting of the
rmanrc file to use (--dialog ) or not to use
(--nodialog ) the more user friendly dialog
UI if available (and not rendered inactive by multiple -q ). |
--yestoall |
Assume 'Yes' as answer to all questions. This is targeted
for use in cron jobs once you tested what the command to execute does.
Normally, the script asks for your confirmation before deleting/purging
anything - these questions are suppressed by this option, so use with care. |
|
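To pick up the crontab example announced at the --all option above:
an entry running the daily backup for all configured databases every night at
2:30 (schedule and path are just examples, of course) could look like this:
30 2 * * * /path/to/rman.sh backup_daily --all -q -q --yestoall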
|
ExportOracleDB |
ExportOracleDB is another stand-alone script. For now, its
only intention is to make a fast full export of a given database and
optionally compress it. In the future, it may be extended with some more
options.
Basically it does nothing you could not also do with a "parfile":
It runs EXP with some pre-configured values (which you can override on the
command line), and then optionally compresses the resulting dump file.
Decide for yourself which way seems better to you - you are free to use the
one or the other.
Still here? Hm, looks like you want to give the script a try. So here is
its syntax:
./ExportOracleDB [options]
where the options are:
|
Option | Description |
-b <buffer_size> |
Use a different size for BUFFER. The value must be an integer. |
-c <Consistency> |
Different value for CONSISTENT. Possible values: Y or N |
-d <Directory> |
Directory where the export (and log) files go. Must
point to an existing directory, or the script will fail. |
-f <file> |
Name of the resulting dump file (will be appended to
the directory configured or passed on the command line) |
-h, --help |
Display help screen |
-l <Logfile> |
Alternative name for the log file to be created (will
be appended to the directory specified) |
-o <ORACLE_SID> |
Alternative ORACLE_SID (overriding the one in the
environment) to specify the database to export from |
-p <comPress> |
Alternate setting for COMPRESS. While EXP's default is
"Y" (use segment compression), the script defaults to "N". Which
you can of course change in the script itself. |
-s <Statistics> |
Value for the STATISTICS parameter. While defaulting to
"estimate" in the EXP utility, the script defaults to "none",
switching the statistics off - in most cases they have not proven
useful, and since export/import implicitly optimizes your data,
you would need to re-generate them afterwards anyway. |
-u <User/Pass> |
Override the ORACLE_USERID (if it's set at all). If
there's no User/Password specified, EXP will ask for it at the
command line prompt. |
-z <ZipCmd> |
Compression program to use. At the moment, the script
only understands "gzip", "bzip2" or "none" - the latter switches
off compression |
--dryrun |
Just print out what would be done - but don't really
execute anything. |
|
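Putting it together: a full export of the (exemplary) instance oradb to the
directory /var/backup/exports, compressed with bzip2, could be started like
this:
./ExportOracleDB -o oradb -d /var/backup/exports -z bzip2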
Exit Codes |
ExportOracleDB defines a couple of exit codes you can check
when running it from other scripts. The basic rule is: As long as the
exit code is less than 10, nothing is in danger (i.e. the export was done,
just some other weird things appeared). Details are listed in the following
table:
|
Exit Code | Description |
0 |
Everything is very fine |
1 |
Could not compress the dump file - the specified
compression utility was not found |
2 |
Could not compress the dump file - the compress command
threw an error |
11 |
There was no ORACLE_SID specified, so no export could be done |
12 |
The specified directory to save the files to does not exist
(again, no backup done) |
13 |
The EXP utility was not found in the PATH - no export done |
14 |
The EXP utility threw an error - the dump file may have errors
(no file compression is done after that error occurred) |
|
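Following the "less than 10" rule from above, a minimal sketch of a wrapper
script evaluating these exit codes (paths, instance name and mail notification
are of course just placeholder examples) could look like this:
#!/bin/sh
# run the export and remember its exit code
/path/to/ExportOracleDB -o oradb -d /var/backup/exports
RC=$?
if [ $RC -ge 10 ]; then
  # 10 and above: the export itself failed
  echo "Export failed with exit code $RC" | mail -s "Export FAILED" dba@example.com
elif [ $RC -gt 0 ]; then
  # 1..9: export done, but the compression step had a problem
  echo "Export done, but exit code $RC signals a compression problem"
fi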