Oracle 10g / 11g Database Administration
May 17, 2012
Sai DBA
Oracle Database Administration: Contents

1. Introduction to Oracle Database .... 6
   Overview of Oracle Grid Architecture .... 6
   Difference between a cluster and a grid .... 7
   Responsibilities of Database Administrators .... 7
2. Creating the Database .... 7
3. Managing Tablespaces and Datafiles .... 11
   Creating New Tablespaces .... 11
   Bigfile Tablespaces (Introduced in Oracle Ver. 10g) .... 12
   To Extend the Size of a Tablespace .... 12
   To Decrease the Size of a Tablespace .... 13
   Coalescing Tablespaces .... 13
   Taking Tablespaces Offline or Online .... 13
   Making a Tablespace Read Only .... 14
   Renaming Tablespaces .... 14
   Dropping Tablespaces .... 14
   Temporary Tablespace .... 15
   Increasing or Decreasing the Size of a Temporary Tablespace .... 15
   Tablespace Groups .... 15
   Creating a Temporary Tablespace Group .... 16
   Assigning a Tablespace Group as the Default Temporary Tablespace .... 16
   Diagnosing and Repairing Locally Managed Tablespace Problems .... 16
   Scenario 1: Fixing Bitmap When Allocated Blocks are Marked Free (No Overlap) .... 17
   Scenario 2: Dropping a Corrupted Segment .... 18
   Scenario 3: Fixing Bitmap Where Overlap is Reported .... 18
   Scenario 4: Correcting Media Corruption of Bitmap Blocks .... 18
   Scenario 5: Migrating from a Dictionary-Managed to a Locally Managed Tablespace .... 18
   Transporting Tablespaces .... 19
   Procedure for Transporting Tablespaces .... 20
   Transporting Tablespace Example .... 21
   Viewing Information about Tablespaces and Datafiles .... 24
   Relocating or Renaming Datafiles .... 24
   Renaming or Relocating Datafiles Belonging to a Single Tablespace .... 24
   Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces .... 25
4. Managing REDO LOGFILES .... 27
   Adding a New Redo Logfile Group .... 27
   Adding Members to an Existing Group .... 27
   Dropping Members from a Group .... 27
   Dropping a Logfile Group .... 28
   Resizing Logfiles .... 28
   Renaming or Relocating Logfiles .... 28
   Clearing REDO LOGFILES .... 29
   Viewing Information About Logfiles .... 29
5. Managing Control Files .... 30
   Multiplexing the Control File .... 30
   Changing the Name of a Database .... 31
   Creating a New Control File .... 31
   Cloning an Oracle Database .... 32
6. Managing the UNDO TABLESPACE .... 36
   Switching to Automatic Management of Undo Space .... 36
   Calculating the Space Requirements for Undo Retention .... 36
   Altering an UNDO Tablespace .... 37
   Dropping an Undo Tablespace .... 37
   Switching Undo Tablespaces .... 37
   Viewing Information about the Undo Tablespace .... 38
7. SQL Loader .... 39
   Case Study: Loading Data from MS-ACCESS into Oracle .... 39
   Case Study: Loading Data from a Fixed-Length File into Oracle .... 41
   Loading Data into Multiple Tables using the WHEN Condition .... 42
   Conventional Path Load and Direct Path Load .... 43
   Direct Path .... 44
   Restrictions on Using Direct Path Loads .... 44
8. Export and Import .... 45
   Invoking Export and Import .... 45
   Command Line Parameters of the Export Tool .... 45
   Example of Exporting a Full Database .... 47
   Example of Exporting Schemas .... 47
   Exporting Individual Tables .... 47
   Exporting a Consistent Image of the Tables .... 48
   Using the Import Utility .... 49
   Example: Importing Individual Tables .... 49
   Example: Importing Tables of One User Account into Another User Account .... 49
   Example: Importing Tables Using Pattern Matching .... 49
   Migrating a Database Across Platforms .... 49
9. DATA PUMP Utility .... 51
   Using the Data Pump Export Utility .... 51
   Example of Exporting a Full Database .... 51
   Example of Exporting a Schema .... 52
   Exporting Individual Tables using Data Pump Export .... 52
   Excluding and Including Objects during Export .... 53
   Using a Query to Filter Rows during Export .... 53
   Suspending and Resuming Export Jobs (Attaching and Re-Attaching to the Jobs) .... 53
   Data Pump Import Utility .... 55
   Importing a Full Dump File .... 55
   Importing Objects of One Schema into Another Schema .... 55
   Loading Objects of One Tablespace into Another Tablespace .... 55
   Generating a SQL File containing DDL Commands using Data Pump Import .... 55
   Importing Objects of Only a Particular Schema .... 56
   Example .... 56
   Importing Only Particular Tables .... 56
   Running the Import Utility in Interactive Mode .... 56
10. Flash Back Features .... 58
   Flashback Query .... 58
   Using Flashback Version Query .... 58
   Using Flashback Table to Return a Table to Past States .... 60
   Purging Objects from the Recycle Bin .... 61
   Flashback Drop of Multiple Objects with the Same Original Name .... 62
   Flashback Database: Alternative to Point-In-Time Recovery .... 63
   Enabling Flashback Database .... 63
   How Large to Set the Flash Recovery Area .... 63
   How Far You Can Flash Back the Database .... 64
   Example: Flashing Back the Database to a Point in Time .... 64
11. Log Miner .... 66
   LogMiner Configuration .... 66
   LogMiner Dictionary Options .... 66
   Using the Online Catalog .... 67
   Extracting a LogMiner Dictionary to the Redo Log Files .... 67
   Extracting the LogMiner Dictionary to a Flat File .... 67
   Redo Log File Options .... 68
   Automatically .... 68
   Manually .... 68
   Example: Finding All Modifications in the Current Redo Log File .... 68
   Example of Mining Without Specifying the List of Redo Log Files Explicitly .... 70
   Example: Mining Redo Log Files in a Given Time Range .... 70
12. BACKUP AND RECOVERY .... 75
   Opening the Database in Archivelog Mode .... 75
   Bringing the Database Back into NoArchiveLog Mode .... 75
   Taking Offline (COLD) Backups .... 76
   Taking Online (HOT) Backups .... 76
   Recovering from the Loss of a Datafile .... 76
Introduction to Oracle Database

An Oracle database is a collection of data treated as a unit. The purpose of a database is to store and retrieve related information. A database server is the key to solving the problems of information management. In general, a server reliably manages a large amount of data in a multiuser environment so that many users can concurrently access the same data. All this is accomplished while delivering high performance. A database server also prevents unauthorized access and provides efficient solutions for failure recovery.

Oracle Database is the first database designed for enterprise grid computing, the most flexible and cost-effective way to manage information and applications. Enterprise grid computing creates large pools of industry-standard, modular storage and servers. With this architecture, each new system can be rapidly provisioned from the pool of components. There is no need to provision for peak workloads, because capacity can be easily added or reallocated from the resource pools as needed.

The database has logical structures and physical structures. Because the physical and logical structures are separate, the physical storage of data can be managed without affecting access to the logical storage structures.
Overview of Oracle Grid Architecture

The Oracle grid architecture pools large numbers of servers, storage, and networks into a flexible, on-demand computing resource for enterprise computing needs. The grid computing infrastructure continually analyzes demand for resources and adjusts supply accordingly.
For example, you could run different applications on a grid of several linked database servers. When reports are due at the end of the month, the database administrator could automatically provision more servers to that application to handle the increased demand. Grid computing uses sophisticated workload management that makes it possible for applications to share resources across many servers. Data processing capacity can be added or removed on demand, and resources within a location can be dynamically provisioned. Web services can quickly integrate applications to create new business processes.
Difference between a cluster and a grid

Clustering is one technology used to create a grid infrastructure. Simple clusters have static resources for specific applications by specific owners. Grids, which can consist of multiple clusters, are dynamic resource pools shareable among many different applications and users. A grid does not assume that all servers in the grid are running the same set of applications. Applications can be scheduled and migrated across servers in the grid. Grids share resources from and among independent system owners.

At the highest level, the idea of grid computing is computing as a utility. In other words, you should not care where your data resides, or what computer processes your request. You should be able to request information or computation and have it delivered - as much as you want, and whenever you want. This is analogous to the way electric utilities work: you don't know where the generator is, or how the electric grid is wired; you just ask for electricity, and you get it. The goal is to make computing a utility, a commodity, and ubiquitous. Hence the name, The Grid.

This view of utility computing is, of course, a "client side" view. From the "server side", or behind the scenes, the grid is about resource allocation, information sharing, and high availability. Resource allocation ensures that all those that need or request resources are getting what they need, and that resources are not standing idle while requests go unserviced. Information sharing makes sure that the information users and applications need is available where and when it is needed. High availability features guarantee all the data and computation is always there, just as a utility company always provides electric power.
Responsibilities of Database Administrators

Each database requires at least one database administrator (DBA). An Oracle Database system can be large and can have many users. Therefore, database administration is sometimes not a one-person job, but a job for a group of DBAs who share responsibility. A database administrator's responsibilities can include the following tasks:

• Installing and upgrading the Oracle Database server and application tools
• Allocating system storage and planning future storage requirements for the database system
• Creating primary database storage structures (tablespaces) after application developers have designed an application
• Creating primary objects (tables, views, indexes) once application developers have designed an application
• Modifying the database structure, as necessary, from information given by application developers
• Enrolling users and maintaining system security
• Ensuring compliance with Oracle license agreements
• Controlling and monitoring user access to the database
• Monitoring and optimizing the performance of the database
• Planning for backup and recovery of database information
• Maintaining archived data on tape
• Backing up and restoring the database
• Contacting Oracle for technical support
Creating the Database

This section presents the steps involved when you create a database manually. These steps should be followed in the order presented. Before you create the database, make sure you have planned the size of the database and the number of tablespaces and redo log files you want in it. Regarding the size of the database, first find out how many tables are going to be created and how much space they will occupy over the next year or two. The best approach is to start with a specific size and later adjust it depending upon the requirement.

Plan the layout of the underlying operating system files your database will comprise. Proper distribution of files can improve database performance dramatically by distributing the I/O during file access. You can distribute I/O in several ways when you install Oracle software and create your database. For example, you can place redo log files on separate disks or use striping. You can situate datafiles to reduce contention. And you can control data density (number of rows to a data block).

Select the standard database block size. This is specified at database creation by the DB_BLOCK_SIZE initialization parameter and cannot be changed after the database is created. A block size of 4K or 8K is widely used.

Before you start creating the database, it is best to write down the specification and then proceed.
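Because DB_BLOCK_SIZE is fixed at creation time, it is worth knowing how to confirm what an existing database was created with. A minimal sketch using the standard parameter interface (the value shown will of course depend on your database):

```sql
-- Confirm the standard block size of an existing database;
-- DB_BLOCK_SIZE cannot be changed after creation.
SQL> SHOW PARAMETER db_block_size

-- or, equivalently:
SQL> SELECT value FROM v$parameter WHERE name = 'db_block_size';
```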
The examples shown in these steps create an example database my_ica_db with the following specification.

Database name and System Identifier:

SID     = myicadb
DB_NAME = myicadb

TABLESPACES (the database will have the following tablespaces, with one datafile in each tablespace):

Tablespace Name   Datafile                                Size
system            /u01/oracle/oradata/myica/sys.dbf       250M
users             /u01/oracle/oradata/myica/usr.dbf       100M
undotbs           /u01/oracle/oradata/myica/undo.dbf      100M
temp              /u01/oracle/oradata/myica/temp.dbf      100M
index_data        /u01/oracle/oradata/myica/indx.dbf      100M
sysaux            /u01/oracle/oradata/myica/sysaux.dbf    100M

LOGFILES (we will have 2 log groups in the database):

Logfile Group   Member                                  Size
Group 1         /u01/oracle/oradata/myica/log1.ora      50M
Group 2         /u01/oracle/oradata/myica/log2.ora      50M

CONTROL FILE     /u01/oracle/oradata/myica/control.ora
PARAMETER FILE   /u01/oracle/dbs/initmyicadb.ora

(Remember, the parameter file name should be of the format init<SID>.ora, and it should be in the ORACLE_HOME/dbs directory on Unix and the ORACLE_HOME/database directory on Windows.)

Now let us start creating the database.

Step 1: Log in to the oracle account and make directories for your database.

$ mkdir /u01/oracle/oradata/myica
$ mkdir /u01/oracle/oradata/myica/bdump
$ mkdir /u01/oracle/oradata/myica/udump
$ mkdir /u01/oracle/oradata/myica/cdump

Step 2: Create the parameter file by copying the default template (init.ora) and set the required parameters.

$ cd /u01/oracle/dbs
$ cp init.ora initmyicadb.ora

Now open the parameter file and set the following parameters:

$ vi initmyicadb.ora

DB_NAME=myicadb
DB_BLOCK_SIZE=8192
CONTROL_FILES=/u01/oracle/oradata/myica/control.ora
BACKGROUND_DUMP_DEST=/u01/oracle/oradata/myica/bdump
USER_DUMP_DEST=/u01/oracle/oradata/myica/udump
CORE_DUMP_DEST=/u01/oracle/oradata/myica/cdump
UNDO_TABLESPACE=undotbs
UNDO_MANAGEMENT=AUTO

After entering the above parameters, save the file by pressing Esc :wq

Step 3: Now set the ORACLE_SID environment variable and start the instance.

$ export ORACLE_SID=myicadb
$ sqlplus
Enter User: / as sysdba
SQL> startup nomount

Step 4: Give the create database command. Here I am not specifying optional settings such as language, character set, etc. For these settings Oracle will use the default values. I am giving the barest command to create the database, to keep it simple. The command to create the database is:
SQL> create database myicadb
     datafile '/u01/oracle/oradata/myica/sys.dbf' size 250M
     sysaux datafile '/u01/oracle/oradata/myica/sysaux.dbf' size 100M
     undo tablespace undotbs
          datafile '/u01/oracle/oradata/myica/undo.dbf' size 100M
     default temporary tablespace temp
          tempfile '/u01/oracle/oradata/myica/tmp.dbf' size 100M
     logfile group 1 '/u01/oracle/oradata/myica/log1.ora' size 50M,
             group 2 '/u01/oracle/oradata/myica/log2.ora' size 50M;

After the command finishes you will get the following message:

Database created.

If you are getting any errors, then see the accompanying messages. If no accompanying messages are shown, then you have to look at the alert_myicadb.log file located in the BACKGROUND_DUMP_DEST directory, which will show the exact reason why the command has failed. After you have rectified the error, delete all created files in the /u01/oracle/oradata/myica directory and give the above command again.

Step 5: After the above command finishes, the database will get mounted and opened. Now create additional tablespaces.

To create the USERS tablespace:

SQL> create tablespace users
     datafile '/u01/oracle/oradata/myica/usr.dbf' size 100M;

To create the INDEX_DATA tablespace:

SQL> create tablespace index_data
     datafile '/u01/oracle/oradata/myica/indx.dbf' size 100M;

Step 6: To populate the database with data dictionaries and to install procedural options, execute the following scripts. First execute the CATALOG.SQL script to install the data dictionaries:

SQL> @/u01/oracle/rdbms/admin/catalog.sql

The above script will take several minutes. After it is finished, run the CATPROC.SQL script to install the procedural option:

SQL> @/u01/oracle/rdbms/admin/catproc.sql

This script will also take several minutes to complete.

Step 7: Now change the passwords for the SYS and SYSTEM accounts, since the default passwords change_on_install and manager are known by everybody.

SQL> alter user sys identified by myica;
SQL> alter user system identified by myica;

Step 8: Create additional user accounts. You can create as many user accounts as you like. Let us create the popular account SCOTT.

SQL> create user scott default tablespace users
     identified by tiger quota 10M on users;
SQL> grant connect to scott;

Step 9: Add this database SID in the listener.ora file and restart the listener process.
$ cd /u01/oracle/network/admin
$ vi listener.ora

(This file will already contain sample entries. Copy and paste one sample entry and edit the SID setting.)

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 200.200.100.1)(PORT = 1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/oracle)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = ORCL)
      (ORACLE_HOME = /u01/oracle)
    )
    # Add these lines
    (SID_DESC =
      (SID_NAME = myicadb)
      (ORACLE_HOME = /u01/oracle)
    )
  )

Save the file by pressing Esc :wq

Now restart the listener process.

$ lsnrctl stop
$ lsnrctl start

Step 10: It is recommended to take a full database backup right after you have created the database. How to take a backup is dealt with in the Backup and Recovery section.

Congratulations, you have just created an Oracle database.
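As a quick sanity check after creation, you can confirm that the database is open; a minimal sketch using the standard V$DATABASE and V$INSTANCE views (the exact values shown will depend on your system):

```sql
-- Confirm the database name and that it is open for read/write
SQL> SELECT name, open_mode FROM v$database;

-- Confirm the instance status (should be OPEN)
SQL> SELECT instance_name, status FROM v$instance;
```

From the operating system, `lsnrctl status` should now also list a service for the myicadb SID you added to listener.ora.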
Managing Tablespaces and Datafiles

Using multiple tablespaces provides several advantages:

• Separate user data from data dictionary data to reduce contention among dictionary objects and schema objects for the same datafiles.
• Separate data of one application from the data of another to prevent multiple applications from being affected if a tablespace must be taken offline.
• Store the datafiles of different tablespaces on different disk drives to reduce I/O contention.
• Take individual tablespaces offline while others remain online, providing better overall availability.
Creating New Tablespaces

You can create Locally Managed or Dictionary Managed tablespaces. In prior versions of Oracle only dictionary managed tablespaces were available, but from Oracle ver. 8i you can also create locally managed tablespaces. Locally managed tablespaces track all extent information in the tablespace itself by using bitmaps, resulting in the following benefits:

• Concurrency and speed of space operations is improved, because space allocations and deallocations modify locally managed resources (bitmaps stored in header files) rather than requiring centrally managed resources such as enqueues.
• Performance is improved, because recursive operations that are sometimes required during dictionary-managed space allocation are eliminated.
To create a locally managed tablespace give the following command:

SQL> CREATE TABLESPACE ica_lmts
     DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M
     EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

AUTOALLOCATE causes the tablespace to be system managed with a minimum extent size of 64K. The alternative to AUTOALLOCATE is UNIFORM, which specifies that the tablespace is managed with extents of uniform size. You can specify that size in the SIZE clause of UNIFORM. If you omit SIZE, then the default size is 1M. The following example creates a locally managed tablespace with a uniform extent size of 256K:

SQL> CREATE TABLESPACE ica_lmt
     DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K;

To create a Dictionary Managed tablespace:

SQL> CREATE TABLESPACE ica_lmt
     DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M
     EXTENT MANAGEMENT DICTIONARY;
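You can see how each existing tablespace is managed by querying the data dictionary; a small sketch using the standard DBA_TABLESPACES view:

```sql
-- EXTENT_MANAGEMENT is LOCAL or DICTIONARY;
-- ALLOCATION_TYPE is SYSTEM (autoallocate), UNIFORM, or USER
SQL> SELECT tablespace_name, extent_management, allocation_type
     FROM dba_tablespaces;
```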
Bigfile Tablespaces (Introduced in Oracle Ver. 10g)

A bigfile tablespace is a tablespace with a single, but very large (up to 4G blocks) datafile. With an 8K block size, that is a single datafile of up to 32 terabytes. Traditional smallfile tablespaces, in contrast, can contain multiple datafiles, but the files cannot be as large. Bigfile tablespaces can reduce the number of datafiles needed for a database. To create a bigfile tablespace give the following command:

SQL> CREATE BIGFILE TABLESPACE ica_bigtbs
     DATAFILE '/u02/oracle/ica/bigtbs01.dbf' SIZE 50G;
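Whether a tablespace is a bigfile or smallfile tablespace is recorded in the data dictionary; a small sketch using the standard DBA_TABLESPACES view:

```sql
-- BIGFILE is YES for bigfile tablespaces, NO for smallfile ones
SQL> SELECT tablespace_name, bigfile FROM dba_tablespaces;
```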
To Extend the Size of a Tablespace

Option 1
You can extend the size of a tablespace by increasing the size of an existing datafile. Type the following command:

SQL> alter database datafile '/u01/oracle/data/icatbs01.dbf' resize 100M;

This will increase the size from 50M to 100M.

Option 2
You can also extend the size of a tablespace by adding a new datafile to it. This is useful if the existing datafile has reached the o/s file size limit, or the drive where the file resides has no free space. To add a new datafile to an existing tablespace give the following command:

SQL> alter tablespace ica add datafile
     '/u02/oracle/ica/icatbs02.dbf' size 50M;

Option 3
You can also use the auto-extend feature of a datafile. In this case, Oracle will automatically increase the size of a datafile whenever space is required. You can specify by how much the file should grow and the maximum size to which it should extend. To make an existing datafile auto-extendable give the following command:

SQL> alter database datafile '/u01/oracle/ica/icatbs01.dbf'
     autoextend ON next 5M maxsize 500M;

You can also make a datafile auto-extendable while creating a new tablespace itself, by giving the following command:

SQL> create tablespace ica
     datafile '/u01/oracle/ica/icatbs01.dbf' size 50M
     autoextend ON next 5M maxsize 500M;
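The current auto-extend settings of each datafile can be checked from the data dictionary; a small sketch using the standard DBA_DATA_FILES view (the tablespace name ICA is just the running example from above):

```sql
-- AUTOEXTENSIBLE is YES/NO; INCREMENT_BY is in blocks, MAXBYTES in bytes
SQL> SELECT file_name, autoextensible, increment_by, maxbytes
     FROM dba_data_files
     WHERE tablespace_name = 'ICA';
```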
To Decrease the Size of a Tablespace

You can decrease the size of a tablespace by decreasing the datafile associated with it. You can decrease a datafile only up to the size of the empty space in it. To decrease the size of a datafile give the following command:

SQL> alter database datafile '/u01/oracle/ica/icatbs01.dbf' resize 30M;
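Before shrinking a datafile it helps to know how much free space it actually contains, since Oracle will refuse to resize a file below its used space; a small sketch using the standard DBA_FREE_SPACE view (the tablespace name ICA is the running example):

```sql
-- Free space per datafile, in megabytes
SQL> SELECT file_id, SUM(bytes)/1024/1024 AS free_mb
     FROM dba_free_space
     WHERE tablespace_name = 'ICA'
     GROUP BY file_id;
```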
Coalescing Tablespaces

A free extent in a dictionary-managed tablespace is made up of a collection of contiguous free blocks. When allocating new extents to a tablespace segment, the database uses the free extent closest in size to the required extent. In some cases, when segments are dropped, their extents are deallocated and marked as free, but adjacent free extents are not immediately recombined into larger free extents. The result is fragmentation that makes allocation of larger extents more difficult. You should regularly use the ALTER TABLESPACE ... COALESCE statement to manually coalesce any adjacent free extents. To coalesce a tablespace give the following command:

SQL> alter tablespace ica coalesce;
Taking Tablespaces Offline or Online

You can take an online tablespace offline so that it is temporarily unavailable for general use. The rest of the database remains open and available for users to access data. Conversely, you can bring an offline tablespace online to make the schema objects within the tablespace available to database users. The database must be open to alter the availability of a tablespace.

To alter the availability of a tablespace, use the ALTER TABLESPACE statement. You must have the ALTER TABLESPACE or MANAGE TABLESPACE system privilege.

To take a tablespace offline give the following command:

SQL> alter tablespace ica offline;

To bring it back online give the following command:

SQL> alter tablespace ica online;

To take an individual datafile offline type the following command:
SQL> alter database datafile '/u01/oracle/ica/ica_tbs01.dbf' offline;

Again, to bring it back online give the following command:

SQL> alter database datafile '/u01/oracle/ica/ica_tbs01.dbf' online;

Note: You cannot take individual datafiles offline if the database is running in NOARCHIVELOG mode. If the datafile has become corrupt or gone missing while the database is running in NOARCHIVELOG mode, then you can only drop it, by giving the following command:

SQL> alter database datafile '/u01/oracle/ica/ica_tbs01.dbf'
     offline for drop;
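The current online/offline status of tablespaces and their datafiles can be checked from the dictionary and dynamic views; a small sketch using the standard DBA_TABLESPACES and V$DATAFILE views:

```sql
-- Tablespace status: ONLINE, OFFLINE, or READ ONLY
SQL> SELECT tablespace_name, status FROM dba_tablespaces;

-- Per-datafile status, e.g. SYSTEM, ONLINE, OFFLINE, RECOVER
SQL> SELECT name, status FROM v$datafile;
```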
Making a Tablespace Read Only

Making a tablespace read-only prevents write operations on the datafiles in the tablespace. The primary purpose of read-only tablespaces is to eliminate the need to perform backup and recovery of large, static portions of a database. Read-only tablespaces also provide a way of protecting historical data so that users cannot modify it. Making a tablespace read-only prevents updates on all tables in the tablespace, regardless of a user's update privilege level.

To make a tablespace read only:

SQL> alter tablespace ica read only;

Again, to make it read-write:

SQL> alter tablespace ica read write;
Renaming Tablespaces

Using the RENAME TO clause of ALTER TABLESPACE, you can rename a permanent or temporary tablespace. For example, the following statement renames the users tablespace:

SQL> ALTER TABLESPACE users RENAME TO usersts;

The following affect the operation of this statement:

• The COMPATIBLE parameter must be set to 10.0 or higher.
• If the tablespace being renamed is the SYSTEM tablespace or the SYSAUX tablespace, then it will not be renamed and an error is raised.
• If any datafile in the tablespace is offline, or if the tablespace is offline, then the tablespace is not renamed and an error is raised.
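Since the rename depends on the COMPATIBLE setting, it can be checked beforehand; a small sketch using the standard parameter interface:

```sql
SQL> SHOW PARAMETER compatible

-- or:
SQL> SELECT value FROM v$parameter WHERE name = 'compatible';
```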
Dropping Tablespaces

You can drop a tablespace and its contents (the segments contained in the tablespace) from the database if the tablespace and its contents are no longer required. You must have the DROP TABLESPACE system privilege to drop a tablespace.
Caution: Once a tablespace has been dropped, the data in the tablespace is not recoverable. Therefore, make sure that all data contained in a tablespace to be dropped will not be required in the future. Also, immediately before and after dropping a tablespace from a database, back up the database completely.

To drop a tablespace give the following command:

SQL> drop tablespace ica;

This will drop the tablespace only if it is empty. If it is not empty and you want to drop it anyhow, then add the following keywords:

SQL> drop tablespace ica including contents;

This will drop the tablespace even if it is not empty. But the datafiles will not be deleted; you have to use an operating system command to delete the files. If you also include the datafiles keyword, then the associated datafiles are deleted from the disk as well:

SQL> drop tablespace ica including contents and datafiles;

Temporary Tablespace

A temporary tablespace is used for sorting large tables. Every database should have one temporary tablespace. To create a temporary tablespace give the following command:

SQL> create temporary tablespace temp
     tempfile '/u01/oracle/data/ica_temp.dbf' size 100M
     extent management local uniform size 5M;

The extent management clause is optional for temporary tablespaces because all temporary tablespaces are created with locally managed extents of a uniform size. The AUTOALLOCATE clause is not allowed for temporary tablespaces.
Increasing or Decreasing the Size of a Temporary Tablespace

You can use the RESIZE clause to increase or decrease the size of a temporary tablespace. The following statement resizes a temporary file:

SQL> ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' RESIZE 18M;

The following statement drops a temporary file and deletes the operating system file:

SQL> ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf'
     DROP INCLUDING DATAFILES;
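Temporary files do not appear in DBA_DATA_FILES; they have their own dictionary view. A small sketch using the standard DBA_TEMP_FILES view:

```sql
-- List temp files with their sizes in megabytes
SQL> SELECT file_name, tablespace_name, bytes/1024/1024 AS size_mb
     FROM dba_temp_files;
```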
Tablespace Groups

A tablespace group enables a user to consume temporary space from multiple tablespaces. A tablespace group has the following characteristics:

• It contains at least one tablespace. There is no explicit limit on the maximum number of tablespaces that are contained in a group.
• It shares the namespace of tablespaces, so its name cannot be the same as any tablespace.
• You can specify a tablespace group name wherever a tablespace name would appear when you assign a default temporary tablespace for the database or a temporary tablespace for a user.

You do not explicitly create a tablespace group. Rather, it is created implicitly when you assign the first temporary tablespace to the group. The group is deleted when the last temporary tablespace it contains is removed from it. Using a tablespace group, rather than a single temporary tablespace, can alleviate problems caused where one tablespace is inadequate to hold the results of a sort, particularly on a table that has many partitions. A tablespace group enables parallel execution servers in a single parallel operation to use multiple temporary tablespaces. The view DBA_TABLESPACE_GROUPS lists tablespace groups and their member tablespaces.
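The DBA_TABLESPACE_GROUPS view mentioned above can be queried directly; a small sketch:

```sql
-- One row per (group, member tablespace) pair
SQL> SELECT group_name, tablespace_name
     FROM dba_tablespace_groups;
```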
Creating a Temporary Tablespace Group

You create a tablespace group implicitly when you include the TABLESPACE GROUP clause in the CREATE TEMPORARY TABLESPACE or ALTER TABLESPACE statement and the specified tablespace group does not currently exist. For example, if neither group1 nor group2 exists, then the following statements create those groups, each of which has only the specified tablespace as a member:

SQL> CREATE TEMPORARY TABLESPACE ica_temp2
     TEMPFILE '/u02/oracle/ica/ica_temp.dbf' SIZE 50M
     TABLESPACE GROUP group1;

SQL> ALTER TABLESPACE ica_temp2 TABLESPACE GROUP group2;
Assigning a Tablespace Group as the Default Temporary Tablespace

Use the ALTER DATABASE ... DEFAULT TEMPORARY TABLESPACE statement to assign a tablespace group as the default temporary tablespace for the database. For example:

SQL> ALTER DATABASE sample DEFAULT TEMPORARY TABLESPACE group2;
Diagnosing and Repairing Locally Managed Tablespace Problems
To diagnose and repair corruption in locally managed tablespaces, Oracle supplies a package called DBMS_SPACE_ADMIN. This package has many procedures, described below:
Procedure                         Description

SEGMENT_VERIFY                    Verifies the consistency of the extent map of the segment.

SEGMENT_CORRUPT                   Marks the segment corrupt or valid so that appropriate error
                                  recovery can be done. Cannot be used for a locally managed
                                  SYSTEM tablespace.

SEGMENT_DROP_CORRUPT              Drops a segment currently marked corrupt (without reclaiming
                                  space). Cannot be used for a locally managed SYSTEM tablespace.

SEGMENT_DUMP                      Dumps the segment header and extent map of a given segment.

TABLESPACE_VERIFY                 Verifies that the bitmaps and extent maps for the segments in
                                  the tablespace are in sync.

TABLESPACE_REBUILD_BITMAPS        Rebuilds the appropriate bitmap. Cannot be used for a locally
                                  managed SYSTEM tablespace.

TABLESPACE_FIX_BITMAPS            Marks the appropriate data block address range (extent) as free
                                  or used in the bitmap. Cannot be used for a locally managed
                                  SYSTEM tablespace.

TABLESPACE_REBUILD_QUOTAS         Rebuilds quotas for a given tablespace.

TABLESPACE_MIGRATE_FROM_LOCAL     Migrates a locally managed tablespace to a dictionary-managed
                                  tablespace. Cannot be used to migrate a locally managed SYSTEM
                                  tablespace to a dictionary-managed SYSTEM tablespace.

TABLESPACE_MIGRATE_TO_LOCAL       Migrates a tablespace from dictionary-managed format to locally
                                  managed format.

TABLESPACE_RELOCATE_BITMAPS       Relocates the bitmaps to the destination specified. Cannot be
                                  used for a locally managed SYSTEM tablespace.

TABLESPACE_FIX_SEGMENT_STATES     Fixes the state of the segments in a tablespace in which
                                  migration was aborted.
Be careful when using the above procedures: if they are not used properly, you can corrupt your database. Contact Oracle Support before using them. Following are some of the scenarios where you can use the above procedures.
Scenario 1: Fixing Bitmap When Allocated Blocks are Marked Free (No Overlap)
The TABLESPACE_VERIFY procedure discovers that a segment has allocated blocks that are marked free in the bitmap, but no overlap between segments is reported.

In this scenario, perform the following tasks:
1. Call the SEGMENT_DUMP procedure to dump the ranges that the administrator allocated to the segment.
2. For each range, call the TABLESPACE_FIX_BITMAPS procedure with the TABLESPACE_EXTENT_MAKE_USED option to mark the space as used.
3. Call TABLESPACE_REBUILD_QUOTAS to fix up quotas.
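The steps above can be sketched as the following session. This is an illustrative sketch only: the tablespace name ICA_TBS and the file/block numbers are hypothetical, and the real values must come from the SEGMENT_DUMP trace output on your own system (run only under Oracle Support guidance):

```sql
-- 1. Dump the extent ranges of the segment whose header is at
--    relative file 4, block 33 (hypothetical values)
EXEC DBMS_SPACE_ADMIN.SEGMENT_DUMP('ICA_TBS', 4, 33);

-- 2. For each range reported, mark the extent as used in the bitmap
--    (here: relative file 4, blocks 34 through 35 -- hypothetical)
EXEC DBMS_SPACE_ADMIN.TABLESPACE_FIX_BITMAPS('ICA_TBS', 4, 34, 35, -
        DBMS_SPACE_ADMIN.TABLESPACE_EXTENT_MAKE_USED);

-- 3. Fix up the quotas
EXEC DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_QUOTAS('ICA_TBS');
```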
Scenario 2: Dropping a Corrupted Segment
You cannot drop a segment because the bitmap has segment blocks marked "free". The system has automatically marked the segment corrupted. In this scenario, perform the following tasks:
1. Call the SEGMENT_VERIFY procedure with the SEGMENT_VERIFY_EXTENTS_GLOBAL option. If no overlaps are reported, then proceed with steps 2 through 5.
2. Call the SEGMENT_DUMP procedure to dump the DBA ranges allocated to the segment.
3. For each range, call TABLESPACE_FIX_BITMAPS with the TABLESPACE_EXTENT_MAKE_FREE option to mark the space as free.
4. Call SEGMENT_DROP_CORRUPT to drop the SEG$ entry.
5. Call TABLESPACE_REBUILD_QUOTAS to fix up quotas.
Scenario 3: Fixing Bitmap Where Overlap is Reported
The TABLESPACE_VERIFY procedure reports some overlapping. Some of the real data must be sacrificed based on previous internal errors. After choosing the object to be sacrificed, in this case say table t1, perform the following tasks:
1. Make a list of all objects that t1 overlaps.
2. Drop table t1. If necessary, follow up by calling the SEGMENT_DROP_CORRUPT procedure.
3. Call the SEGMENT_VERIFY procedure on all objects that t1 overlapped. If necessary, call the TABLESPACE_FIX_BITMAPS procedure to mark appropriate bitmap blocks as used.
4. Rerun the TABLESPACE_VERIFY procedure to verify that the problem is resolved.
Scenario 4: Correcting Media Corruption of Bitmap Blocks
A set of bitmap blocks has media corruption. In this scenario, perform the following tasks:
1. Call the TABLESPACE_REBUILD_BITMAPS procedure, either on all bitmap blocks, or on a single block if only one is corrupt.
2. Call the TABLESPACE_REBUILD_QUOTAS procedure to rebuild quotas.
3. Call the TABLESPACE_VERIFY procedure to verify that the bitmaps are consistent.
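A sketch of those three calls, using a hypothetical tablespace name ICA_TBS (again, run only under Oracle Support guidance):

```sql
-- Rebuild all bitmap blocks of the tablespace, then quotas, then verify
EXEC DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS('ICA_TBS');
EXEC DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_QUOTAS('ICA_TBS');
EXEC DBMS_SPACE_ADMIN.TABLESPACE_VERIFY('ICA_TBS');
```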
Scenario 5: Migrating from a Dictionary-Managed to a Locally Managed Tablespace
To migrate a dictionary-managed tablespace to a locally managed tablespace, use the TABLESPACE_MIGRATE_TO_LOCAL procedure. For example, to migrate the dictionary-managed tablespace ICA2 to locally managed, give the following command:

EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('ica2');
Transporting Tablespaces
You can use the transportable tablespaces feature to move a subset of an Oracle Database and "plug" it in to another Oracle Database, essentially moving tablespaces between the databases. The tablespaces being transported can be either dictionary managed or locally managed. Starting with Oracle9i, the transported tablespaces are not required to be of the same block size as the target database's standard block size. Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data, because the datafiles containing all of the actual data are simply copied to the destination location, and you use an import utility to transfer only the metadata of the tablespace objects to the new database. Starting with Oracle Database 10g, you can transport tablespaces across platforms. This functionality can be used to migrate a database from one platform to another. However, not all platforms are supported. To see which platforms are supported, give the following query:
SQL> COLUMN PLATFORM_NAME FORMAT A30
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                  ENDIAN_FORMAT
----------- ------------------------------ --------------
          1 Solaris[tm] OE (32-bit)        Big
          2 Solaris[tm] OE (64-bit)        Big
          7 Microsoft Windows NT           Little
         10 Linux IA (32-bit)              Little
          6 AIX-Based Systems (64-bit)     Big
          3 HP-UX (64-bit)                 Big
          5 HP Tru64 UNIX                  Little
          4 HP-UX IA (64-bit)              Big
         11 Linux IA (64-bit)              Little
         15 HP Open VMS                    Little

10 rows selected.

If the source platform and the target platform are of different endianness, then an additional step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

Important: Before a tablespace can be transported to a different platform, the datafile header must identify the platform to which it belongs. In an Oracle Database with compatibility set to 10.0.0 or higher, you can accomplish this by making the datafile read/write at least once:

SQL> alter tablespace ica read only;

Then,

SQL> alter tablespace ica read write;
Procedure for transporting tablespaces
To move or copy a set of tablespaces, perform the following steps:
1. For cross-platform transport, check the endian format of both platforms by querying the V$TRANSPORTABLE_PLATFORM view. If you are transporting the tablespace set to a platform different from the source platform, determine whether the source and target platforms are supported and check their endianness. If both platforms have the same endianness, no conversion is necessary; otherwise you must convert the tablespace set either at the source or at the target database. Ignore this step if you are transporting your tablespace set to the same platform.
2. Pick a self-contained set of tablespaces.
3. Generate a transportable tablespace set. A transportable tablespace set consists of the datafiles for the set of tablespaces being transported and an export file containing structural information for the set of tablespaces. If you are transporting the tablespace set to a platform with different endianness from the source platform, you must convert the tablespace set to the endianness of the target platform. You can perform a source-side conversion at this step in the procedure, or you can perform a target-side conversion as part of step 4.
4. Transport the tablespace set. Copy the datafiles and the export file to the target database. You can do this using any facility for copying flat files (for example, an operating system copy utility, ftp, the DBMS_FILE_TRANSFER package, or publishing on CDs). If you have transported the tablespace set to a platform with different endianness from the source platform, and you have not performed a source-side conversion to the endianness of the target platform, you should perform a target-side conversion now.
5. Plug in the tablespace. Invoke the Import utility to plug the set of tablespaces into the target database.
Transporting Tablespace Example
These steps are illustrated more fully in the example that follows, where it is assumed the following tablespaces and datafiles exist:

Tablespace      Datafile
ica_sales_1     /u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf
ica_sales_2     /u01/oracle/oradata/ica_salesdb/ica_sales_201.dbf
Step 1: Determine if Platforms are Supported and Endianness
This step is only necessary if you are transporting the tablespace set to a platform different from the source platform. If ica_sales_1 and ica_sales_2 were being transported to a different platform, you can execute the following query on both platforms to determine if the platforms are supported and their endian formats:

SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
The following is the query result from the source platform:

PLATFORM_NAME            ENDIAN_FORMAT
------------------------ --------------
Solaris[tm] OE (32-bit)  Big

The following is the result from the target platform:

PLATFORM_NAME            ENDIAN_FORMAT
------------------------ --------------
Microsoft Windows NT     Little

You can see that the endian formats are different, and thus a conversion is necessary for transporting the tablespace set.

Step 2: Pick a Self-Contained Set of Tablespaces
There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained. That is, it should not have tables with foreign keys referring to primary keys of tables in other tablespaces, and it should not have tables with some partitions in other tablespaces. To find out whether the tablespace set is self-contained, execute:

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('ica_sales_1,ica_sales_2', TRUE);

After executing the above, give the following query to see whether any violations exist:
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

VIOLATIONS
---------------------------------------------------------------------------
Constraint DEPT_FK between table SAMI.EMP in tablespace ICA_SALES_1 and
table SAMI.DEPT in tablespace OTHER
Partitioned table SAMI.SALES is partially contained in the transportable set
These violations must be resolved before ica_sales_1 and ica_sales_2 are transportable.

Step 3: Generate a Transportable Tablespace Set
After ensuring you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set by performing the following actions.

Make all tablespaces in the set you are copying read-only:

SQL> ALTER TABLESPACE ica_sales_1 READ ONLY;
Tablespace altered.

SQL> ALTER TABLESPACE ica_sales_2 READ ONLY;
Tablespace altered.

Invoke the Export utility on the host system and specify which tablespaces are in the transportable set:

SQL> HOST
$ exp system/password FILE=/u01/oracle/expdat.dmp TRANSPORT_TABLESPACES=ica_sales_1,ica_sales_2

If ica_sales_1 and ica_sales_2 are being transported to a different platform, and the endianness of the platforms is different, and if you want to convert before transporting the tablespace set, then convert the datafiles composing the ica_sales_1 and ica_sales_2 tablespaces. You have to use the RMAN utility to convert the datafiles:
$ RMAN TARGET /

Recovery Manager: Release 10.1.0.0.0
Copyright (c) 1995, 2003, Oracle Corporation.  All rights reserved.

connected to target database: ica_salesdb (DBID=3295731590)

Convert the datafiles into a temporary location on the source platform. In this example, assume that the temporary location, directory /temp, has already been created. The converted datafiles are assigned names by the system.

RMAN> CONVERT TABLESPACE ica_sales_1, ica_sales_2
      TO PLATFORM 'Microsoft Windows NT'
      FORMAT '/temp/%U';

Starting backup at 08-APR-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=11 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00005 name=/u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-ADMIN_TBS_FNO-5_05ek24v5
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00004 name=/u01/oracle/oradata/ica_salesdb/ica_sales_201.dbf
converted datafile=/temp/data_D-10_I-3295731590_TS-EXAMPLE_FNO-4_06ek24vl
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45
Finished backup at 08-APR-07

Step 4: Transport the Tablespace Set
Transport both the datafiles and the export file of the tablespaces to a place accessible to the target database. You can use any facility for copying flat files (for example, an operating system copy utility, ftp, the DBMS_FILE_TRANSFER package, or publishing on CDs).

Step 5: Plug In the Tablespace Set
Plug in the tablespaces and integrate the structural information using the Import utility, imp:

IMP system/password FILE=expdat.dmp
    DATAFILES=/ica_salesdb/ica_sales_101.dbf,/ica_salesdb/ica_sales_201.dbf
    REMAP_SCHEMA=(smith:sami) REMAP_SCHEMA=(williams:john)

The REMAP_SCHEMA parameter changes the ownership of database objects. If you do not specify REMAP_SCHEMA, all database objects (such as tables and indexes) are created in the same user schema as in the source database, and those users must already exist in the target database. If they do not exist, then the import utility returns an error. In this example, objects in the tablespace set owned by smith in the source database will be owned by sami in the target database after the tablespace set is plugged in. Similarly, objects owned by williams in the source database will be owned by john in the target database. In this case, the target database is not required to have users smith and williams, but must have users sami and john.
After this statement executes successfully, all tablespaces in the set being copied remain in read-only mode. Check the import logs to ensure that no error has occurred. Now, put the tablespaces into read/write mode as follows:
ALTER TABLESPACE ica_sales_1 READ WRITE;
ALTER TABLESPACE ica_sales_2 READ WRITE;
Viewing Information about Tablespaces and Datafiles
Oracle provides many data dictionary views for viewing information about tablespaces and datafiles. Some of them are listed below.

To view information about tablespaces in a database, give the following queries:
SQL> select * from dba_tablespaces;
SQL> select * from v$tablespace;

To view information about datafiles:

SQL> select * from dba_data_files;
SQL> select * from v$datafile;

To view information about tempfiles:

SQL> select * from dba_temp_files;
SQL> select * from v$tempfile;

To view information about free space in datafiles:

SQL> select * from dba_free_space;

To view information about free space in tempfiles:

SQL> select * from V$TEMP_SPACE_HEADER;
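DBA_FREE_SPACE returns one row per free extent, so a per-tablespace summary query such as the following (a sketch using the view's standard columns) is often more readable:

```sql
SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024, 2) AS free_mb,        -- total free space
       COUNT(*)                       AS free_extents,   -- fragmentation hint
       ROUND(MAX(bytes)/1024/1024, 2) AS largest_extent_mb
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY tablespace_name;
```

The largest free extent matters because a segment that needs a bigger next extent than any single free extent will fail to extend even when total free space looks sufficient.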
Relocating or Renaming Datafiles You can rename datafiles to either change their names or relocate them.
Renaming or Relocating Datafiles belonging to a Single Tablespace
To rename or relocate datafiles belonging to a single tablespace, do the following:
1. Take the tablespace offline.
2. Rename or relocate the datafiles using operating system commands.
3. Give the ALTER TABLESPACE statement with the RENAME DATAFILE option to change the filenames within the database.
4. Bring the tablespace online.
For example, suppose you have a tablespace users with the following datafiles:

/u01/oracle/ica/usr01.dbf
/u01/oracle/ica/usr02.dbf

Now you want to relocate /u01/oracle/ica/usr01.dbf to /u02/oracle/ica/usr01.dbf and rename /u01/oracle/ica/usr02.dbf to /u01/oracle/ica/users02.dbf. Follow the given steps:

1. Take the tablespace offline:

SQL> alter tablespace users offline;

2. Copy the file to the new location using an O/S command:

$ cp /u01/oracle/ica/usr01.dbf /u02/oracle/ica/usr01.dbf

Rename the file /u01/oracle/ica/usr02.dbf to /u01/oracle/ica/users02.dbf using an O/S command:

$ mv /u01/oracle/ica/usr02.dbf /u01/oracle/ica/users02.dbf

3. Now start SQL*Plus and type the following command to rename and relocate these files within the database:

SQL> alter tablespace users rename file
     '/u01/oracle/ica/usr01.dbf', '/u01/oracle/ica/usr02.dbf'
     to '/u02/oracle/ica/usr01.dbf', '/u01/oracle/ica/users02.dbf';

4. Now bring the tablespace online:

SQL> alter tablespace users online;
Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces
You can rename and relocate datafiles in one or more tablespaces using the ALTER DATABASE RENAME FILE statement. This method is the only choice if you want to rename or relocate datafiles of several tablespaces in one operation. You must have the ALTER DATABASE system privilege.

To rename datafiles in multiple tablespaces, follow these steps:

1. Ensure that the database is mounted but closed.

2. Copy the datafiles to be renamed to their new locations and new names, using operating system commands.

3. Use ALTER DATABASE to rename the file pointers in the database control file. For example, the following statement renames the datafiles /u02/oracle/rbdb1/sort01.dbf and /u02/oracle/rbdb1/user3.dbf to /u02/oracle/rbdb1/temp01.dbf and /u02/oracle/rbdb1/users03.dbf, respectively:

ALTER DATABASE RENAME FILE '/u02/oracle/rbdb1/sort01.dbf',
                           '/u02/oracle/rbdb1/user3.dbf'
                        TO '/u02/oracle/rbdb1/temp01.dbf',
                           '/u02/oracle/rbdb1/users03.dbf';

Always provide complete filenames (including their paths) to properly identify the old and new datafiles. In particular, specify the old datafile names exactly as they appear in the DBA_DATA_FILES view.

4. Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.

5. Start the database.
Managing REDO LOGFILES
Every Oracle database must have at least 2 redo logfile groups. Oracle writes all statements, except SELECT statements, to the logfiles. This is done because Oracle performs deferred batch writes; that is, it does not write changes to disk per statement, it performs writes in batches. So if a user updates a row, Oracle changes the row in the db_buffer_cache, records the statement in the logfile, and tells the user that the row is updated. The row is not yet actually written back to the datafile, but the user is still told that it is updated. After 3 seconds the row is actually written to the datafile. This is known as deferred batch writes.
Since Oracle defers writing to the datafile, there is a chance of power failure or system crash before the row is written to disk. That is why Oracle writes the statement to the redo logfile: in case of power failure or system crash, Oracle can re-execute the statements the next time you open the database.
Adding a New Redo Logfile Group To add a new Redo Logfile group to the database give the following command
SQL> alter database add logfile group 3 '/u01/oracle/ica/log3.ora' size 10M;

Note: You can add groups to a database up to the MAXLOGFILES setting you specified at the time of creating the database. If you want to change the MAXLOGFILES setting, you have to create a new controlfile.
Adding Members to an existing group To add new member to an existing group give the following command
SQL> alter database add logfile member '/u01/oracle/ica/log11.ora' to group 1;

Note: You can add members to a group up to the MAXLOGMEMBERS setting you specified at the time of creating the database. If you want to change the MAXLOGMEMBERS setting, you have to create a new controlfile.

Important: It is strongly recommended that you multiplex logfiles, i.e. have at least two log members per group, one member on one disk and another on a second disk.
Dropping Members from a group
You can drop a member from a log group only if the group has more than one member and it is not the current group. If you want to drop members from the current group, force a log switch or wait until a log switch occurs and another group becomes current. To force a log switch, give the following command:
SQL> alter system switch logfile;

The following command can be used to drop a logfile member:

SQL> alter database drop logfile member '/u01/oracle/ica/log11.ora';
Note: When you drop logfiles the files are not deleted from the disk. You have to use O/S command to delete the files from disk.
Dropping Logfile Group
Similarly, you can drop a logfile group only if the database has more than two groups and it is not the current group.
SQL> alter database drop logfile group 3;

Note: When you drop a logfile group the files are not deleted from the disk. You have to use O/S command to delete the files from disk.
Resizing Logfiles
You cannot resize logfiles. If you want to resize a logfile, create a new logfile group with the new size and then drop the old logfile group.
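A sketch of that workaround, assuming group 3 holds the old files and the new file path is hypothetical:

```sql
-- Create a replacement group with the desired (larger) size
ALTER DATABASE ADD LOGFILE GROUP 4 '/u01/oracle/ica/log4.ora' SIZE 50M;

-- Make sure group 3 is no longer CURRENT or ACTIVE
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;

-- Drop the old group (then delete its file at the O/S level)
ALTER DATABASE DROP LOGFILE GROUP 3;
```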
Renaming or Relocating Logfiles
To rename or relocate logfiles, perform the following steps.

For example, suppose you want to move a logfile from '/u01/oracle/ica/log1.ora' to '/u02/oracle/ica/log1.ora'; then do the following:

1. Shutdown the database:

SQL> shutdown immediate;

2. Move the logfile from the old location to the new location using an operating system command:

$ mv /u01/oracle/ica/log1.ora /u02/oracle/ica/log1.ora

3. Start and mount the database:

SQL> startup mount

4. Now give the following command to change the location in the controlfile:

SQL> alter database rename file '/u01/oracle/ica/log1.ora' to '/u02/oracle/ica/log1.ora';

5. Open the database:
SQL> alter database open;

Clearing REDO LOGFILES
A redo log file might become corrupted while the database is open, and ultimately stop database activity because archiving cannot continue. In this situation the ALTER DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without shutting down the database. The following statement clears the log files in redo log group number 3:

ALTER DATABASE CLEAR LOGFILE GROUP 3;

This statement overcomes two situations where dropping redo logs is not possible:

• There are only two log groups
• The corrupt redo log file belongs to the current group

If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement:

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

This statement clears the corrupted redo logs and avoids archiving them. The cleared redo logs are available for use even though they were not archived. If you clear a log file that is needed for recovery of a backup, then you can no longer recover from that backup. The database writes a message in the alert log describing the backups from which you cannot recover.
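Before clearing a group, you can check whether it has already been archived by looking at the ARCHIVED column of V$LOG, for example:

```sql
-- ARCHIVED = NO means UNARCHIVED would be required to clear group 3
SELECT group#, archived, status FROM v$log WHERE group# = 3;
```

If ARCHIVED shows NO, take a fresh backup after clearing, since backups that need the cleared log are no longer recoverable.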
Viewing Information About Logfiles
To see how many logfile groups there are and their status, type the following query:

SQL> SELECT * FROM V$LOG;

GROUP# THREAD#   SEQ   BYTES MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- ----- ------- ------- --- -------- ------------- ---------
     1       1 20605 1048576       1 YES ACTIVE        61515628 21-JUN-07
     2       1 20606 1048576       1 NO  CURRENT       41517595 21-JUN-07
     3       1 20603 1048576       1 YES INACTIVE      31511666 21-JUN-07
     4       1 20604 1048576       1 YES INACTIVE      21513647 21-JUN-07
To see how many members there are and where they are located, give the following query:

SQL> SELECT * FROM V$LOGFILE;

GROUP# STATUS  MEMBER
------ ------- --------------------------
     1         /U01/ORACLE/ICA/LOG1.ORA
     2         /U01/ORACLE/ICA/LOG2.ORA
Managing Control Files
Every Oracle Database has a control file, which is a small binary file that records the physical structure of the database. The control file includes:

• The database name
• Names and locations of associated datafiles and redo log files
• The timestamp of the database creation
• The current log sequence number
• Checkpoint information
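The locations of the current control files can be confirmed from the database itself, for example:

```sql
-- One row per control file copy in use
SELECT name FROM v$controlfile;
```

The same information is available through the CONTROL_FILES initialization parameter (SHOW PARAMETER control_files in SQL*Plus).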
It is strongly recommended that you multiplex control files, i.e. have at least two control files, one on one hard disk and another on a different disk. That way, if the control file on one disk becomes corrupt, the other copy will still be available and you will not have to recover the control file. You can multiplex the control file at the time of creating a database and also later on. If you have not multiplexed the control file at the time of creating the database, you can do it now by following the given procedure.
Multiplexing Control File
Steps:

1. Shutdown the database:

SQL> SHUTDOWN IMMEDIATE;

2. Copy the control file from the old location to the new location using an operating system command. For example:

$ cp /u01/oracle/ica/control.ora /u02/oracle/ica/control.ora

3. Now open the parameter file and specify the new location like this:

CONTROL_FILES=/u01/oracle/ica/control.ora

Change it to:

CONTROL_FILES=/u01/oracle/ica/control.ora,/u02/oracle/ica/control.ora

4. Start the database.
Now Oracle will start updating both control files, and if one control file is lost you can copy it from the other location.
Changing the Name of a Database
If you ever want to change the name of a database, or want to change the settings of MAXDATAFILES, MAXLOGFILES, or MAXLOGMEMBERS, then you have to create a new control file.
Creating A New Control File
Follow the given steps to create a new controlfile.

Steps:

1. First generate the create controlfile statement:

SQL> alter database backup controlfile to trace;

After giving this statement, Oracle will write the CREATE CONTROLFILE statement to a trace file. The trace file will be randomly named, something like ORA23212.TRC, and it is created in the USER_DUMP_DEST directory.

2. Go to the USER_DUMP_DEST directory and open the latest trace file in a text editor. This file will contain the CREATE CONTROLFILE statement. It will have two sets of statements, one with RESETLOGS and another without RESETLOGS. Since we are changing the name of the database, we have to use the RESETLOGS option of the CREATE CONTROLFILE statement. Now copy and paste the statement into a file. Let it be c.sql.
3. Now open the c.sql file in a text editor and change the database name from ica to prod, as shown in the example below:

CREATE CONTROLFILE
   SET DATABASE prod
   LOGFILE GROUP 1 ('/u01/oracle/ica/redo01_01.log',
                    '/u01/oracle/ica/redo01_02.log'),
           GROUP 2 ('/u01/oracle/ica/redo02_01.log',
                    '/u01/oracle/ica/redo02_02.log'),
           GROUP 3 ('/u01/oracle/ica/redo03_01.log',
                    '/u01/oracle/ica/redo03_02.log')
   RESETLOGS
   DATAFILE '/u01/oracle/ica/system01.dbf' SIZE 3M,
            '/u01/oracle/ica/rbs01.dbs' SIZE 5M,
            '/u01/oracle/ica/users01.dbs' SIZE 5M,
            '/u01/oracle/ica/temp01.dbs' SIZE 5M
   MAXLOGFILES 50
   MAXLOGMEMBERS 3
   MAXLOGHISTORY 400
   MAXDATAFILES 200
   MAXINSTANCES 6
   ARCHIVELOG;
4. Start and do not mount the database:

SQL> STARTUP NOMOUNT;

5. Now execute the c.sql script:

SQL> @/u01/oracle/c.sql

6. Now open the database with RESETLOGS:

SQL> ALTER DATABASE OPEN RESETLOGS;

Cloning an Oracle Database
You have a production database running on one server. The company management wants to develop some new modules, and they have hired some programmers to do that. These programmers require access to the production database and want to make changes to it. As a DBA you cannot give them direct access to the production database, so you want to create a copy of this database on another server and give the developers access to that copy.

Let us see an example of cloning a database. We have a database running on the production server with the following files:
PARAMETER FILE located in /u01/oracle/ica/initica.ora
CONTROL FILES=/u01/oracle/ica/control.ora
BACKGROUND_DUMP_DEST=/u01/oracle/ica/bdump
USER_DUMP_DEST=/u01/oracle/ica/udump
CORE_DUMP_DEST=/u01/oracle/ica/cdump
LOG_ARCHIVE_DEST_1="location=/u01/oracle/ica/arc1"
DATAFILES = /u01/oracle/ica/sys.dbf
            /u01/oracle/ica/usr.dbf
            /u01/oracle/ica/rbs.dbf
            /u01/oracle/ica/tmp.dbf
            /u01/oracle/ica/sysaux.dbf
LOGFILE= /u01/oracle/ica/log1.ora
         /u01/oracle/ica/log2.ora
Now you want to copy this database to SERVER 2, but SERVER 2 does not have a /u01 filesystem; it has a /d01 filesystem instead. To clone this database on SERVER 2, do the following.

Steps:
1. On SERVER 2, install the same version of the operating system and the same version of Oracle as on SERVER 1.
2. On SERVER 1, generate the CREATE CONTROLFILE statement by typing the following command:

SQL>alter database backup controlfile to trace;
Now, go to the USER_DUMP_DEST directory and open the latest trace file. This file contains the steps as well as the CREATE CONTROLFILE statement. Copy the CREATE CONTROLFILE statement and paste it into a file. Let the filename be cr.sql
The CREATE CONTROLFILE statement will look like this:

CREATE CONTROLFILE SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/ica/log1.ora'),
        GROUP 2 ('/u01/oracle/ica/log2.ora')
DATAFILE '/u01/oracle/ica/sys.dbf' SIZE 300M,
         '/u01/oracle/ica/rbs.dbf' SIZE 50M,
         '/u01/oracle/ica/usr.dbf' SIZE 50M,
         '/u01/oracle/ica/tmp.dbf' SIZE 50M,
         '/u01/oracle/ica/sysaux.dbf' SIZE 100M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
3. On SERVER 2, create the following directories:

$cd /d01/oracle
$mkdir ica
$mkdir arc1
$cd ica
$mkdir bdump udump cdump
Shut down the database on SERVER 1 and transfer all datafiles, logfiles, and the control file to SERVER 2 into the /d01/oracle/ica directory.
Copy the parameter file to SERVER 2 into the /d01/oracle/dbs directory and copy all archive log files to SERVER 2 into the /d01/oracle/ica/arc1 directory. Copy the cr.sql script file to the /d01/oracle/ica directory.
4. Open the parameter file on SERVER 2 and change the following parameters:

CONTROL FILES=/d01/oracle/ica/control.ora
BACKGROUND_DUMP_DEST=/d01/oracle/ica/bdump
USER_DUMP_DEST=/d01/oracle/ica/udump
CORE_DUMP_DEST=/d01/oracle/ica/cdump
LOG_ARCHIVE_DEST_1="location=/d01/oracle/ica/arc1"
5. Now, open the cr.sql file in a text editor and change the locations like this:

CREATE CONTROLFILE SET DATABASE prod
LOGFILE GROUP 1 ('/d01/oracle/ica/log1.ora'),
        GROUP 2 ('/d01/oracle/ica/log2.ora')
DATAFILE '/d01/oracle/ica/sys.dbf' SIZE 300M,
         '/d01/oracle/ica/rbs.dbf' SIZE 50M,
         '/d01/oracle/ica/usr.dbf' SIZE 50M,
         '/d01/oracle/ica/tmp.dbf' SIZE 50M,
         '/d01/oracle/ica/sysaux.dbf' SIZE 100M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;

On SERVER 2, export the ORACLE_SID environment variable and start the instance:
$export ORACLE_SID=ica
$sqlplus
Enter User: / as sysdba
SQL> startup nomount;

6. Run the cr.sql script to create the controlfile:
SQL>@/d01/oracle/ica/cr.sql

7. Open the database:
SQL>alter database open;
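Before handing the clone over to the developers, it is worth a quick sanity check that the database really is open. A minimal sketch, using the standard V$DATABASE dynamic view:

```sql
-- should return one row showing the database name and READ WRITE
SQL> SELECT name, open_mode FROM v$database;
```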
Managing the UNDO TABLESPACE

Every Oracle database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo. Undo records are used to:

• Roll back transactions when a ROLLBACK statement is issued
• Recover the database
• Provide read consistency
• Analyze data as of an earlier point in time by using Flashback Query
• Recover from logical corruptions using Flashback features
Earlier releases of Oracle Database used rollback segments to store undo. Oracle9i introduced automatic undo management, which simplifies undo space management by eliminating the complexities associated with rollback segment management. Oracle strongly recommends that you use an undo tablespace to manage undo rather than rollback segments.
Switching to Automatic Management of Undo Space

To switch to automatic management of undo space, set the following parameters.

Steps:

1. If you have not created an undo tablespace at the time of creating the database, create an undo tablespace by typing the following command:
SQL>create undo tablespace myundo datafile '/u01/oracle/ica/undo_tbs.dbf' size 500M autoextend ON next 5M;

When the system is first running in the production environment, you may be unsure of the space requirements of the undo tablespace. In this case, you can enable automatic extension for datafiles of the undo tablespace so that they automatically increase in size when more space is needed.

2. Shut down the database and set the following parameters in the parameter file:
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=myundo

3. Start the database.
Now Oracle Database will use Automatic Undo Space Management.
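You can confirm the switch after startup with the standard SQL*Plus SHOW PARAMETER command (a quick check; the output layout varies by version):

```sql
-- should report AUTO and MYUNDO respectively
SQL> SHOW PARAMETER undo_management
SQL> SHOW PARAMETER undo_tablespace
```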
Calculating the Space Requirements For Undo Retention

You can calculate space requirements manually using the following formula:

UndoSpace = UR * UPS + overhead
where:
• UndoSpace is the number of undo blocks
• UR is UNDO_RETENTION in seconds. This value should take into consideration long-running queries and any flashback requirements.
• UPS is the number of undo blocks generated each second
• overhead is the small overhead for metadata (transaction tables, bitmaps, and so forth)
As an example, if UNDO_RETENTION is set to 3 hours, and the transaction rate (UPS) is 100 undo blocks each second, with an 8K block size, the required undo space is computed as follows:

(3 * 3600 * 100 * 8K) = 8.24 GB

To get the values for UPS and overhead, query the V$UNDOSTAT view by giving the following statement:
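The arithmetic above can be checked with a small shell sketch. The rates here are the example's assumed values, not measured ones; in practice UPS comes from V$UNDOSTAT:

```shell
UR=$((3 * 3600))          # UNDO_RETENTION: 3 hours, in seconds
UPS=100                   # undo blocks generated per second (assumed)
BLK=8192                  # 8K database block size, in bytes
BYTES=$((UR * UPS * BLK)) # overhead omitted; it is small by comparison
echo "$BYTES"             # 8847360000 bytes, i.e. about 8.24 GB
```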
SQL> Select * from V$UNDOSTAT;

Altering the UNDO Tablespace

If the undo tablespace is full, you can resize existing datafiles or add new datafiles to it. The following example extends an existing datafile:

SQL> alter database datafile '/u01/oracle/ica/undo_tbs.dbf' resize 700M;

The following example adds a new datafile to the undo tablespace:
SQL> ALTER TABLESPACE myundo ADD DATAFILE '/u01/oracle/ica/undo02.dbf' SIZE 200M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;
Dropping an Undo Tablespace

Use the DROP TABLESPACE statement to drop an undo tablespace. The following example drops the undo tablespace myundo:

SQL> DROP TABLESPACE myundo;
An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (for example, a transaction died but has not yet been recovered), the DROP TABLESPACE statement fails.
Switching Undo Tablespaces

You can switch from using one undo tablespace to another. Because the UNDO_TABLESPACE initialization parameter is a dynamic parameter, the ALTER SYSTEM SET statement can be used to assign a new undo tablespace.
The following statement switches to a new undo tablespace:

ALTER SYSTEM SET UNDO_TABLESPACE = myundo2;
Assuming myundo is the current undo tablespace, after this command successfully executes, the instance uses myundo2 in place of myundo as its undo tablespace.
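After switching, you can list the undo tablespaces known to the database. A sketch using the standard DBA_TABLESPACES view:

```sql
-- both the old and the new undo tablespace appear until the old one is dropped
SQL> SELECT tablespace_name, status
     FROM dba_tablespaces
     WHERE contents = 'UNDO';
```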
Viewing Information about the Undo Tablespace

To view statistics for tuning the undo tablespace, query the following dictionary view:
SQL>select * from v$undostat;

To see how many active transactions there are and to see undo segment information, give the following command:
SQL>select * from v$transaction;

To see the sizes of extents in the undo tablespace, give the following query:
SQL>select * from DBA_UNDO_EXTENTS;
SQL Loader

The SQL*Loader utility is used to load data from other data sources into Oracle. For example, if you have a table in FoxPro, Access, Sybase, or any other third-party database, you can use SQL*Loader to load the data into Oracle tables. SQL*Loader reads data only from flat files. So if you want to load data from FoxPro or any other database, you must first convert that data into a delimited-format or fixed-length-format flat file, and then use SQL*Loader to load the data into Oracle.

The following is the procedure to load data from a third-party database into Oracle using SQL*Loader:

1. Convert the data into a flat file using a third-party database command.
2. Create the table structure in the Oracle database using appropriate datatypes.
3. Write a control file describing how to interpret the flat file and the options to load the data.
4. Execute the SQL*Loader utility, specifying the control file as a command line argument.
To understand this better, let us look at the following case study.
CASE STUDY (Loading Data from MS-ACCESS to Oracle)

Suppose you have a table named EMP in MS-ACCESS, running under Windows, with the following structure:

EMPNO    INTEGER
NAME     TEXT(50)
SAL      CURRENCY
JDATE    DATE

This table contains some 10,000 rows. Now you want to load the data from this table into an Oracle table. The Oracle database is running on Linux.
Solution Steps

Start MS-Access and convert the table into a comma-delimited flat file (popularly known as CSV) by clicking on the File/Save As menu. Let the delimited file name be emp.csv

1. Now transfer this file to the Linux server using the FTP command:

a. Go to the command prompt in Windows.
b. At the command prompt, type FTP followed by the IP address of the server running Oracle. FTP will then prompt you for a username and password to connect to the Linux server. Supply a valid username and password of the Oracle user on Linux. For example:

C:\>ftp 200.200.100.111
Name: oracle
Password: oracle
FTP>
c. Now give the PUT command to transfer the file from the current Windows machine to the Linux machine:

FTP>put
Local file: C:\>emp.csv
remote-file: /u01/oracle/emp.csv
File transferred in 0.29 Seconds
FTP>

d. Now after the file is transferred, quit the FTP utility by typing the bye command:

FTP>bye
Good-Bye

2. Now come to the Linux machine and create a table in Oracle with the same structure as in MS-ACCESS, taking appropriate datatypes. For example, create a table like this:
$sqlplus scott/tiger

SQL>CREATE TABLE emp (empno number(5),
    name varchar2(50),
    sal number(10,2),
    jdate date);

3. After creating the table, you have to write a control file describing the actions which SQL*Loader should perform. You can use any text editor to write the control file. Now let us write a control file for our case study:
$vi emp.ctl
1  LOAD DATA
2  INFILE '/u01/oracle/emp.csv'
3  BADFILE '/u01/oracle/emp.bad'
4  DISCARDFILE '/u01/oracle/emp.dsc'
5  INSERT INTO TABLE emp
6  FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
7  (empno, name, sal, jdate date 'mm/dd/yyyy')

Notes: (Do not write the line numbers; they are meant for explanation purposes only.)

1. The LOAD DATA statement is required at the beginning of the control file.
2. The INFILE option specifies where the input file is located.

3. Specifying BADFILE is optional. If you specify it, then bad records found during loading will be stored in this file.

4. Specifying DISCARDFILE is optional. If you specify it, then records which do not meet a WHEN condition will be written to this file.

5. You can use any of the following loading options:
   1. INSERT: Loads rows only if the target table is empty.
   2. APPEND: Loads rows whether the target table is empty or not.
   3. REPLACE: First deletes all the rows in the existing table and then loads rows.
   4. TRUNCATE: First truncates the table and then loads rows.

6. This line indicates how the fields are separated in the input file. Since in our case the fields are separated by ",", we have specified "," as the terminating character for fields. You can replace this with any character which is used to terminate fields. Some of the popularly used terminating characters are semicolon ";", colon ":", pipe "|", etc. TRAILING NULLCOLS means that if the last column is null it is treated as a null value; otherwise, SQL*Loader will treat the record as bad if the last column is null.

7. In this line, specify the columns of the target table. Note how you specify the format for date columns.
4. After you have written the control file, save it, and then call the SQL*Loader utility by typing the following command:

$sqlldr userid=scott/tiger control=emp.ctl log=emp.log

After you have executed the above command, SQL*Loader will show you output describing how many rows it has loaded.
The LOG option of sqlldr specifies where the log file of this SQL*Loader session should be created. The log file contains all the actions which SQL*Loader has performed, i.e., how many rows were loaded, how many were rejected, how much time was taken to load the rows, and so on. You have to view this file for any errors encountered while running SQL*Loader.

CASE STUDY (Loading Data from a Fixed Length file into Oracle)

Suppose we have a fixed-length format file containing employee data, as shown below, and we want to load this data into an Oracle table:

7782 CLARK      MANAGER   7839  2572.50          10
7839 KING       PRESIDENT       5500.00          10
7934 MILLER     CLERK     7782   920.00          10
7566 JONES      MANAGER   7839  3123.75          20
7499 ALLEN      SALESMAN  7698  1600.00  300.00  30
7654 MARTIN     SALESMAN  7698  1312.50 1400.00  30
7658 CHAN       ANALYST   7566  3450.00          20
7654 MARTIN     SALESMAN  7698  1312.50 1400.00  30
SOLUTION:

Steps:

1. First open the file in a text editor and count the length of the fields. For example, in our fixed-length file, the employee number is from the 1st position to the 4th position, the employee name is from the 6th position to the 15th position, and the job name is from the 17th position to the 25th position. The other columns are located similarly.

2. Create a table in Oracle, by any name, whose columns match those specified in the fixed-length file. In our case, give the following command to create the table:
SQL> CREATE TABLE emp (empno NUMBER(5),
    name VARCHAR2(20),
    job VARCHAR2(10),
    mgr NUMBER(5),
    sal NUMBER(10,2),
    comm NUMBER(10,2),
    deptno NUMBER(3));

3. After creating the table, now write a control file using any text editor:
$vi empfix.ctl
1) LOAD DATA
2) INFILE '/u01/oracle/fix.dat'
3) INTO TABLE emp
4) (empno  POSITION(01:04) INTEGER EXTERNAL,
    name   POSITION(06:15) CHAR,
    job    POSITION(17:25) CHAR,
    mgr    POSITION(27:30) INTEGER EXTERNAL,
    sal    POSITION(32:39) DECIMAL EXTERNAL,
    comm   POSITION(41:48) DECIMAL EXTERNAL,
5)  deptno POSITION(50:51) INTEGER EXTERNAL)
Notes: (Do not write the line numbers; they are meant for explanation purposes only.)

1. The LOAD DATA statement is required at the beginning of the control file.
2. The name of the file containing data follows the INFILE parameter.
3. The INTO TABLE statement is required to identify the table to be loaded into.
4. Lines 4 and 5 identify a column name and the location of the data in the datafile to be loaded into that column. empno, name, job, and so on are names of columns in table emp. The datatypes (INTEGER EXTERNAL, CHAR, DECIMAL EXTERNAL) identify the datatype of the data fields in the file, not of the corresponding columns in the emp table.
5. Note that the set of column specifications is enclosed in parentheses.
4. After saving the control file, now start the SQL*Loader utility by typing the following command:

$sqlldr userid=scott/tiger control=empfix.ctl log=empfix.log direct=y

After you have executed the above command, SQL*Loader will show you output describing how many rows it has loaded.
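Before running the load, you can sanity-check the column positions against a sample record with a quick shell sketch (the sample line and positions are taken from the case study above):

```shell
# one record from fix.dat, padded to the fixed-width layout
LINE='7782 CLARK      MANAGER   7839  2572.50          10'
echo "$LINE" | cut -c1-4    # empno field  -> 7782
echo "$LINE" | cut -c6-15   # name field (trailing blanks preserved)
echo "$LINE" | cut -c17-25  # job field
echo "$LINE" | cut -c27-30  # mgr field    -> 7839
```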
Loading Data into Multiple Tables using a WHEN condition

You can simultaneously load data into multiple tables in the same session. You can also use a WHEN condition to load only those rows which meet a particular condition (only equal to "=" and not equal to "<>" conditions are allowed). For example, suppose we have a fixed-length file as shown below:

7782 CLARK      MANAGER   7839  2572.50          10
7839 KING       PRESIDENT       5500.00          10
7934 MILLER     CLERK     7782   920.00          10
7566 JONES      MANAGER   7839  3123.75          20
7499 ALLEN      SALESMAN  7698  1600.00  300.00  30
7654 MARTIN     SALESMAN  7698  1312.50 1400.00  30
7658 CHAN       ANALYST   7566  3450.00          20
7654 MARTIN     SALESMAN  7698  1312.50 1400.00  30
Now we want to load all the employees whose deptno is 10 into the emp1 table and those employees whose deptno is not equal to 10 into the emp2 table. To do this, first create the tables emp1 and emp2 with appropriate columns and datatypes. Then write a control file as shown below:

$vi emp_multi.ctl
LOAD DATA
INFILE '/u01/oracle/empfix.dat'
APPEND
INTO TABLE scott.emp1
WHEN (deptno='10')
(empno  POSITION(01:04) INTEGER EXTERNAL,
 name   POSITION(06:15) CHAR,
 job    POSITION(17:25) CHAR,
 mgr    POSITION(27:30) INTEGER EXTERNAL,
 sal    POSITION(32:39) DECIMAL EXTERNAL,
 comm   POSITION(41:48) DECIMAL EXTERNAL,
 deptno POSITION(50:51) INTEGER EXTERNAL)
INTO TABLE scott.emp2
WHEN (deptno<>'10')
(empno  POSITION(01:04) INTEGER EXTERNAL,
 name   POSITION(06:15) CHAR,
 job    POSITION(17:25) CHAR,
 mgr    POSITION(27:30) INTEGER EXTERNAL,
 sal    POSITION(32:39) DECIMAL EXTERNAL,
 comm   POSITION(41:48) DECIMAL EXTERNAL,
 deptno POSITION(50:51) INTEGER EXTERNAL)
After saving the file emp_multi.ctl, run sqlldr:

$sqlldr userid=scott/tiger control=emp_multi.ctl

Conventional Path Load and Direct Path Load

SQL*Loader can load data into an Oracle database using the conventional path method or the direct path method. You can specify the method by using the DIRECT command line option. If you give DIRECT=TRUE, then SQL*Loader will use direct path loading; otherwise, if you omit this option or specify DIRECT=false, SQL*Loader will use the conventional path loading method.

Conventional Path

A conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into database tables.
When SQL*Loader performs a conventional path load, it competes equally with all other processes for buffer resources. This can slow the load significantly. Extra overhead is added as SQL statements are generated, passed to Oracle, and executed. The Oracle database looks for partially filled blocks and attempts to fill them on each insert. Although appropriate during normal use, this can slow bulk loads dramatically.

Direct Path
In direct path loading, Oracle does not use the SQL INSERT statement for loading rows. Instead, it writes the rows directly into fresh blocks beyond the high water mark in the datafiles, i.e., it does not scan for free blocks below the high water mark. A direct path load is very fast because:

• Partial blocks are not used, so no reads are needed to find them, and fewer writes are performed.
• SQL*Loader need not execute any SQL INSERT statements; therefore, the processing load on the Oracle database is reduced.
• A direct path load calls on Oracle to lock tables and indexes at the start of the load and releases them when the load is finished. A conventional path load calls Oracle once for each array of rows to process a SQL INSERT statement.
• A direct path load uses multiblock asynchronous I/O for writes to the database files.
• During a direct path load, processes perform their own write I/O, instead of using Oracle's buffer cache. This minimizes contention with other Oracle users.
Restrictions on Using Direct Path Loads

The following conditions must be satisfied for you to use the direct path load method:

• Tables are not clustered.
• Tables to be loaded do not have any active transactions pending.
• You are not loading a parent table together with a child table.
• You are not loading BFILE columns.
Export and Import

These tools are used to transfer data from one Oracle database to another Oracle database. You use the Export tool to export data from the source database and the Import tool to load data into the target database. When you export tables from the source database, the Export tool extracts the tables and puts them into a dump file. This dump file is transferred to the target database. At the target database, the Import tool copies the data from the dump file into the target database.

From version 10g, Oracle recommends using the Data Pump Export and Import tools, which are enhanced versions of the original Export and Import tools.

The export dump file contains objects in the following order:

1. Type definitions
2. Table definitions
3. Table data
4. Table indexes
5. Integrity constraints, views, procedures, and triggers
6. Bitmap, function-based, and domain indexes
When you import the tables, the Import tool performs the actions in the following order: new tables are created, data is imported and indexes are built, triggers are imported, integrity constraints are enabled on the new tables, and any bitmap, function-based, and/or domain indexes are built. This sequence prevents data from being rejected due to the order in which tables are imported. It also prevents redundant triggers from firing twice on the same data.
Invoking Export and Import

You can run the Export and Import tools in two modes:

Command Line Mode
Interactive Mode

When you just type exp or imp at the o/s prompt, it runs in interactive mode, i.e., these tools prompt you for all the necessary input. If you supply command line arguments when calling exp or imp, then it runs in command line mode.

Command Line Parameters of the Export tool
You can control how Export runs by entering the EXP command followed by various arguments. To specify parameters, you use keywords:

Format:  EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
         or TABLES=(T1:P1,T1:P2), if T1 is a partitioned table
Keyword               Description (Default)
--------------------------------------------------------------
USERID                username/password
BUFFER                size of data buffer
FILE                  output files (EXPDAT.DMP)
COMPRESS              import into one extent (Y)
GRANTS                export grants (Y)
INDEXES               export indexes (Y)
DIRECT                direct path (N)
LOG                   log file of screen output
ROWS                  export data rows (Y)
CONSISTENT            cross-table consistency (N)
FULL                  export entire file (N)
OWNER                 list of owner usernames
TABLES                list of table names
RECORDLENGTH          length of IO record
INCTYPE               incremental export type
RECORD                track incr. export (Y)
TRIGGERS              export triggers (Y)
STATISTICS            analyze objects (ESTIMATE)
PARFILE               parameter filename
CONSTRAINTS           export constraints (Y)
OBJECT_CONSISTENT     transaction set to read only during object export (N)
FEEDBACK              display progress every x rows (0)
FILESIZE              maximum size of each dump file
FLASHBACK_SCN         SCN used to set session snapshot back to
FLASHBACK_TIME        time used to get the SCN closest to the specified time
QUERY                 select clause used to export a subset of a table
RESUMABLE             suspend when a space related error is encountered (N)
RESUMABLE_NAME        text string used to identify resumable statement
RESUMABLE_TIMEOUT     wait time for RESUMABLE
TTS_FULL_CHECK        perform full or partial dependency check for TTS
TABLESPACES           list of tablespaces to export
TRANSPORT_TABLESPACE  export transportable tablespace metadata (N)
TEMPLATE              template name which invokes iAS mode export
The Export and Import tools support four modes of operation:

FULL       : Exports all the objects in all schemas
OWNER      : Exports objects belonging only to the given OWNER
TABLES     : Exports individual tables
TABLESPACE : Exports all objects located in a given TABLESPACE

Example of Exporting a Full Database
The following example shows how to export a full database:
$exp USERID=scott/tiger FULL=y FILE=myfull.dmp

In the above command, the FILE option specifies the name of the dump file, the FULL option specifies that you want to export the full database, and the USERID option specifies the user account to connect to the database. Note that to perform a full export, the user should have the DBA role or the EXP_FULL_DATABASE privilege.

Example of Exporting Schemas
To export objects stored in particular schemas, you can run the export utility with the following arguments:
$exp USERID=scott/tiger OWNER=(SCOTT,ALI) FILE=exp_own.dmp

The above command will export all the objects stored in SCOTT's and ALI's schemas.

Exporting Individual Tables
To export individual tables, give the following command:
$exp USERID=scott/tiger TABLES=(scott.emp,scott.sales) FILE=exp_tab.dmp

This will export scott's emp and sales tables.
Exporting a Consistent Image of Tables
If you include the CONSISTENT=Y option in the export command arguments, then the Export utility will export a consistent image of the table, i.e., changes made to the table during the export operation will not be exported.

Using the Import Utility

Objects exported by the Export utility can only be imported by the Import utility. The Import utility can run in interactive mode or command line mode.

You can let Import prompt you for parameters by entering the IMP command followed by your username/password:
Example: IMP SCOTT/TIGER

Or, you can control how Import runs by entering the IMP command followed by various arguments. To specify parameters, you use keywords:
Format:  IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
         or TABLES=(T1:P1,T1:P2), if T1 is a partitioned table

USERID must be the first parameter on the command line.

Keyword               Description (Default)
--------------------------------------------------------------
USERID                username/password
BUFFER                size of data buffer
FILE                  input files (EXPDAT.DMP)
SHOW                  just list file contents (N)
IGNORE                ignore create errors (N)
GRANTS                import grants (Y)
INDEXES               import indexes (Y)
ROWS                  import data rows (Y)
LOG                   log file of screen output
FULL                  import entire file (N)
FROMUSER              list of owner usernames
TOUSER                list of usernames
TABLES                list of table names
RECORDLENGTH          length of IO record
INCTYPE               incremental import type
COMMIT                commit array insert (N)
PARFILE               parameter filename
CONSTRAINTS           import constraints (Y)
DESTROY               overwrite tablespace data file (N)
INDEXFILE             write table/index info to specified file
SKIP_UNUSABLE_INDEXES skip maintenance of unusable indexes (N)
FEEDBACK              display progress every x rows (0)
TOID_NOVALIDATE       skip validation of specified type ids
FILESIZE              maximum size of each dump file
STATISTICS            import precomputed statistics (always)
RESUMABLE             suspend when a space related error is encountered (N)
RESUMABLE_NAME        text string used to identify resumable statement
RESUMABLE_TIMEOUT     wait time for RESUMABLE
COMPILE               compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION import streams general metadata (Y)
STREAMS_INSTANTIATION import streams instantiation metadata (N)
Example: Importing Individual Tables

To import individual tables from a full database export dump file, give the following command:
$imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(emp,dept)

This command will import only the emp and dept tables into the scott user, and you will get output similar to that shown below:

Export file created by EXPORT:V10.00.00 via conventional path
import done in WE8DEC character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing table                       "DEPT"          4 rows imported
. . importing table                        "EMP"         14 rows imported
Import terminated successfully without warnings.
Example: Importing Tables of One User Account into Another User Account

For example, suppose Ali has exported tables into a dump file mytables.dmp. Now Scott wants to import these tables. To achieve this, Scott will give the following import command:

$imp scott/tiger FILE=mytables.dmp FROMUSER=ali TOUSER=scott
Then the Import utility will give a warning that the tables in the dump file were exported by user Ali and not by you, and then proceed.

Example: Importing Tables Using Pattern Matching
Suppose you want to import all tables from a dump file whose names match a particular pattern. To do so, use the "%" wildcard character in the TABLES option. For example, the following command will import all tables whose names start with the letter "e" and those tables whose names contain the letter "d":

$imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(e%,%d%)

Migrating a Database Across Platforms

The Export and Import utilities are the only method that Oracle supports for moving an existing Oracle database from one hardware platform to another. This includes moving between UNIX and NT systems and also moving between two NT systems running on different platforms.
The following steps present a general overview of how to move a database between platforms.

1. As a DBA user, issue the following SQL query to get the exact names of all tablespaces. You will need this information later in the process:

SQL> SELECT tablespace_name FROM dba_tablespaces;

2. As a DBA user, perform a full export from the source database, for example:

> exp system/manager FULL=y FILE=myfullexp.dmp

3. Move the dump file to the target database server. If you use FTP, be sure to copy it in binary format (by entering binary at the FTP prompt) to avoid file corruption.

4. Create a database on the target server.

5. Before importing the dump file, you must first create your tablespaces, using the information obtained in Step 1. Otherwise, the import will create the corresponding datafiles in the same file structure as at the source database, which may not be compatible with the file structure on the target system.

6. As a DBA user, perform a full import with the IGNORE parameter enabled:
> imp system/manager FULL=y IGNORE=y FILE=myfullexp.dmp
Using IGNORE=y instructs Oracle to ignore any creation errors during the import and permit the import to complete.
7. Perform a full backup of your new database.
DATA PUMP Utility
Starting with Oracle 10g, Oracle introduced an enhanced version of the EXPORT and IMPORT utilities known as DATA PUMP. Data Pump is similar to the EXPORT and IMPORT utilities but has many advantages. Some of the advantages are:
• Most Data Pump export and import operations occur on the Oracle database server; that is, all dump files are created on the server even if you run the Data Pump utility from a client machine. This results in increased performance because data is not transferred over the network.
• You can stop and restart export and import jobs. This is particularly useful if you have started an export or import job and after some time need to do some other urgent work.
• The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations.
• The ability to estimate how much space an export job would consume, without actually performing the export.
• Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs.
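The space-estimation advantage above can be exercised with the ESTIMATE_ONLY parameter; the following is a sketch, assuming the data_pump_dir directory object created in the next section already exists:

```shell
# Estimate the space a schema-mode export would need, without writing a dump file.
# DUMPFILE cannot be combined with ESTIMATE_ONLY=y; the estimate goes to the log file.
expdp scott/tiger SCHEMAS=SCOTT ESTIMATE_ONLY=y DIRECTORY=data_pump_dir LOGFILE=estimate.log
```

The estimated size is printed at the end of the log, which lets you size the dump destination before committing to a long-running export.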
Using the Data Pump Export Utility
To use Data Pump, the DBA has to create a directory on the server machine and create a directory object in the database mapping to the directory created in the file system. The following example creates a directory in the filesystem, creates a directory object in the database, and grants privileges on the directory object to the SCOTT user.
$ mkdir my_dump_dir
$ sqlplus
Enter User: / as sysdba
SQL> create directory data_pump_dir as '/u01/oracle/my_dump_dir';
Now grant access on this directory object to the SCOTT user
SQL> grant read, write on directory data_pump_dir to scott;

Example of Exporting a Full Database
To export the full database, give the following command
$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob
The above command will export the full database and create the dump file full.dmp in the directory /u01/oracle/my_dump_dir on the server.
In some cases, where the database is in terabytes, the above command will not be feasible since the dump file size will be larger than the operating system limit, and hence the export will fail. In this situation you can create multiple dump files by typing the following command
$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full%U.dmp FILESIZE=5G LOGFILE=myfullexp.log JOB_NAME=myfullJob
This will create multiple dump files named full01.dmp, full02.dmp, full03.dmp, and so on. The FILESIZE parameter specifies the maximum size of each dump file.
Example of Exporting a Schema
To export all the objects of SCOTT's schema you can run the following Data Pump export command.
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT
You can omit SCHEMAS since the default mode of Data Pump Export is schema mode. If you want to export objects of multiple schemas you can specify the following command
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT,HR,ALI

Exporting Individual Tables using Data Pump Export
You can use the Data Pump Export utility to export individual tables. The following example shows the syntax to export tables
$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departments
Exporting Tables Located in a Tablespace
If you want to export tables located in a particular tablespace you can type the following command
$expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp TABLESPACES=tbs_4,tbs_5,tbs_6
The above will export all the objects located in tbs_4, tbs_5, and tbs_6.
Excluding and Including Objects during Export
You can exclude objects while performing an export by using the EXCLUDE option of the Data Pump utility. For example, if you are exporting a schema and don't want to export tables whose names start with "A", you can type the following command
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT EXCLUDE=TABLE:"like 'A%'"
Then all tables in SCOTT's schema whose names start with "A" will not be exported. Similarly, you can use the INCLUDE option to export only certain objects, like this
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT INCLUDE=TABLE:"like 'A%'"
This is the opposite of the EXCLUDE option; it will export only those tables of SCOTT's schema whose names start with "A". Similarly, you can also exclude INDEXES, CONSTRAINTS, GRANTS, USER, and SCHEMA objects.
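Whole object classes are excluded by naming the object type rather than giving a name clause; as a sketch (using standard Data Pump object-type names), the EXCLUDE parameter can be repeated to skip several types at once:

```shell
# Export SCOTT's schema but skip all indexes and all object grants.
expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_noidx.dmp SCHEMAS=SCOTT EXCLUDE=INDEX EXCLUDE=GRANT
```

No quoting is needed here because neither EXCLUDE carries a name clause.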
Using Query to Filter Rows during Export
You can use the QUERY option to export only the required rows. For example, the following will export only those rows of the employees table whose salary is above 10000 and whose dept_id is greater than 10.
expdp hr/hr QUERY=emp:'"WHERE dept_id > 10 AND sal > 10000"' NOLOGFILE=y DIRECTORY=dpump_dir1 DUMPFILE=exp1.dmp
Suspending and Resuming Export Jobs (Attaching and Re-Attaching to Jobs)
You can suspend running export jobs and later resume or kill them using Data Pump Export. You can start a job on one client machine, suspend it when other work comes up, and afterwards continue the job from the same client, or restart it from another client machine. For example, suppose a DBA starts a full database export by typing the following command at one client machine, CLNT1
$expdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob

After some time, the DBA wants to stop this job temporarily. He presses CTRL+C to enter interactive mode and gets the Export> prompt, where he can type interactive commands. To stop the export job he types the following command

Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
The job is placed in a stopped state and exits the client.
After finishing his other work, the DBA wants to resume the export job. The client machine from which he actually started the job is locked (the user has locked his/her cabin), so the DBA goes to another client machine and reattaches to the job by typing the following command

$expdp hr/hr@mydb ATTACH=myfulljob
After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job.
Export> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the client.
Note: After reattaching to the job, a DBA can also kill the job by typing KILL_JOB if he doesn't want to continue with the export job.
Data Pump Import Utility Objects exported by Data Pump Export Utility can be imported into a database using Data Pump Import Utility. The following describes how to use Data Pump Import utility to import objects
Importing a Full Dump File
If you want to import all the objects in a dump file, you can type the following command.
$impdp hr/hr DUMPFILE=dpump_dir1:expfull.dmp FULL=y LOGFILE=dpump_dir2:full_imp.log
This example imports everything from the expfull.dmp dump file. In this example, a DIRECTORY parameter is not provided. Therefore, a directory object must be provided on both the DUMPFILE parameter and the LOGFILE parameter.

Importing Objects of One Schema into Another Schema
The following example loads all tables belonging to the hr schema into the scott schema
$impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:scott
If the SCOTT account exists in the database, the hr objects will be loaded into the scott schema. If the scott account does not exist, the Import utility will create the SCOTT account with an unusable password, because the dump file was exported by the user SYSTEM and imported by the user SYSTEM, who has DBA privileges.
Loading Objects of One Tablespace into Another Tablespace
You can use the REMAP_TABLESPACE option to import objects of one tablespace into another tablespace by giving the command
$impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_TABLESPACE=users:sales
The above example loads the tables stored in the users tablespace into the sales tablespace.
Generating a SQL File Containing DDL Commands Using Data Pump Import
You can generate a SQL file which contains all the DDL commands which Import would have executed if you had actually run the Import utility. The following is an example of using the SQLFILE parameter.
$ impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql
A SQL file named expfull.sql is written to dpump_dir2.
Importing Objects of Only a Particular Schema
If you have the IMP_FULL_DATABASE role, you can use this parameter to perform a schema-mode import by specifying a single schema other than your own or a list of schemas to import. First, the schemas themselves are created (if they do not already exist), including system and role grants, password history, and so on. Then all objects contained within the schemas are imported. Nonprivileged users can specify only their own schemas. In that case, no information about the schema definition is imported, only the objects contained within it.
Example
The following is an example of using the SCHEMAS parameter. You can create the expdat.dmp file used in this example by running the example provided for the Export SCHEMAS parameter.
$impdp hr/hr SCHEMAS=hr,oe DIRECTORY=dpump_dir1 LOGFILE=schemas.log DUMPFILE=expdat.dmp
The hr and oe schemas are imported from the expdat.dmp file. The log file, schemas.log, is written to dpump_dir1.
Importing Only Particular Tables
The following example shows a simple use of the TABLES parameter to import only the employees and jobs tables from the expfull.dmp file. You can create the expfull.dmp dump file used in this example by running the example provided for the Full Database Export in the previous topic.
$impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs
This will import only the employees and jobs tables from the DUMPFILE.
Running the Import Utility in Interactive Mode
Similar to the Data Pump Export utility, Data Pump Import jobs can also be suspended, resumed, or killed, and you can attach to an already existing import job from any client machine. For example, suppose a DBA starts an import by typing the following command at one client machine, CLNT1
$impdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob

After some time, the DBA wants to stop this job temporarily. He presses CTRL+C to enter interactive mode and gets the Import> prompt, where he can type interactive commands. To stop the import job he types the following command

Import> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
The job is placed in a stopped state and exits the client. After finishing his other work, the DBA wants to resume the import job. The client machine from which he actually started the job is locked (the user has locked his/her cabin), so the DBA goes to another client machine and reattaches to the job by typing the following command

$impdp hr/hr@mydb ATTACH=myfulljob
After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job.
Import> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the client.
Note: After reattaching to the job, a DBA can also kill the job by typing KILL_JOB if he doesn't want to continue with the import job.
Flash Back Features
From Oracle Ver. 9i Oracle has introduced the Flashback Query feature. It is useful for recovering from accidental statement failures. For example, suppose a user accidentally deletes rows from a table and commits; using Flashback Query he can get back the rows. The Flashback feature depends upon how much undo retention time you have specified. If you have set the UNDO_RETENTION parameter to 2 hours, Oracle will try not to overwrite committed data in the undo tablespace until 2 hours have passed. Users can recover only from mistakes made within the last 2 hours. For example, suppose John issues a delete statement at 10 AM and commits it. After 1 hour he realizes that the delete statement was performed by mistake. Now he can issue a flashback AS OF query to get back the deleted rows, like this.
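The retention window referred to above is controlled by the UNDO_RETENTION initialization parameter, which is specified in seconds; a minimal sketch for a 2-hour window:

```sql
-- UNDO_RETENTION is in seconds: 2 hours = 7200.
-- This is a low threshold, not a hard guarantee; Oracle may still
-- overwrite unexpired undo if the undo tablespace runs out of space.
ALTER SYSTEM SET UNDO_RETENTION = 7200;
```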
Flashback Query
SQL> select * from emp as of timestamp sysdate-1/24;
Or
SQL> SELECT * FROM emp AS OF TIMESTAMP TO_TIMESTAMP('2007-06-07 10:00:00', 'YYYY-MM-DD HH:MI:SS');
To insert the accidentally deleted rows back into the table he can type
SQL> insert into emp (select * from emp as of timestamp sysdate-1/24);

Using Flashback Version Query
You use a Flashback Version Query to retrieve the different versions of specific rows that existed during a given time interval. A new row version is created whenever a COMMIT statement is executed. The Flashback Version Query returns a table with a row for each version of the row that existed at any time during the time interval you specify. Each row in the table includes pseudocolumns of metadata about the row version. The pseudocolumns available are

VERSIONS_XID       : Identifier of the transaction that created the row version
VERSIONS_OPERATION : Operation performed. I for Insert, U for Update, D for Delete
VERSIONS_STARTSCN  : Starting System Change Number when the row version was created
VERSIONS_STARTTIME : Starting System Change Time when the row version was created
VERSIONS_ENDSCN    : SCN when the row version expired
VERSIONS_ENDTIME   : Timestamp when the row version expired
To understand, let's see the following example. Before starting this example, let us collect the timestamp.
SQL> select to_char(SYSTIMESTAMP,'YYYY-MM-DD HH:MI:SS') from dual;
TO_CHAR(SYSTIMESTAMP,'YYYY-MM-DD HH:MI:SS')
-------------------------------------------
2007-06-19 20:30:43

Suppose a user creates an emp table, inserts a row into it and commits.
SQL> create table emp (empno number(5), name varchar2(20), sal number(10,2));
SQL> insert into emp values (101,'Sami',5000);
SQL> commit;

At this time the emp table has one version of one row. Now a user sitting at another machine erroneously changes the salary from 5000 to 2000 using an update statement.
SQL> update emp set sal=sal-3000 where empno=101;
SQL> commit;

Subsequently, a new transaction updates the name of the employee from Sami to Smith.
SQL> update emp set name='Smith' where empno=101;
SQL> commit;

At this point, the DBA detects the application error and needs to diagnose the problem. The DBA issues the following query to retrieve the versions of the rows in the emp table that correspond to empno 101. The query uses Flashback Version Query pseudocolumns.
SQL> connect / as sysdba
SQL> column versions_starttime format a16
SQL> column versions_endtime format a16
SQL> set linesize 120;
SQL> select versions_xid, versions_starttime, versions_endtime,
            versions_operation, empno, name, sal
     from emp
     versions between timestamp
       to_timestamp('2007-06-19 20:30:00','yyyy-mm-dd hh:mi:ss')
       and to_timestamp('2007-06-19 21:00:00','yyyy-mm-dd hh:mi:ss');

VERSIONS_XID V STARTSCN ENDSCN EMPNO NAME  SAL
------------ - -------- ------ ----- ----- ----
0200100020D  U    11323          101 SMITH 2000
02001003C02  U    11345          101 SAMI  2000
0002302C03A  I    12320          101 SAMI  5000
The output should be read from bottom to top. From it we can see that an insert took place, then the erroneous update, and then another update that changed the name. The DBA identifies transaction 02001003C02 as the erroneous one and issues the following query to get the SQL command to undo the change.
SQL> select operation, logon_user, undo_sql from flashback_transaction_query
     where xid=HEXTORAW('02001003C02');

OPERATION LOGON_USER UNDO_SQL
--------- ---------- ---------------------------------------------------
U         SCOTT      update emp set sal=5000 where ROWID = 'AAAKD2AABAAAJ29AAA'

Now the DBA can execute the command to undo the change made by the user.

SQL> update emp set sal=5000 where ROWID = 'AAAKD2AABAAAJ29AAA';
1 row updated
Using Flashback Table to Return a Table to a Past State
Oracle Flashback Table provides the DBA the ability to recover a table or set of tables to a specified point in time in the past very quickly, easily, and without taking any part of the database offline. In many cases, Flashback Table eliminates the need to perform more complicated point-in-time recovery operations. Flashback Table uses information in the undo tablespace to restore the table. Therefore, the UNDO_RETENTION parameter is significant when flashing back tables to a past state. You can only flash back tables up to the retention time you specified.
Row movement must be enabled on the table for which you are issuing the FLASHBACK TABLE statement. You can enable row movement with the following SQL statement:
ALTER TABLE table ENABLE ROW MOVEMENT;
The following example performs a FLASHBACK TABLE operation on the table emp.
FLASHBACK TABLE emp TO TIMESTAMP TO_TIMESTAMP('2007-06-19 09:30:00', 'YYYY-MM-DD HH24:MI:SS');
The emp table is restored to its state when the database was at the time specified by the timestamp.
Example: At 17:00 an HR administrator discovers that an employee "JOHN" is missing from the EMPLOYEE table. This employee was present at 14:00, the last time she ran a report. Someone accidentally deleted the record for "JOHN" between 14:00 and the present time. She uses Flashback Table to return the table to its state at 14:00, as shown in this example:

FLASHBACK TABLE EMPLOYEES TO TIMESTAMP
  TO_TIMESTAMP('2007-06-21 14:00:00','YYYY-MM-DD HH:MI:SS')
  ENABLE TRIGGERS;

You have to give the ENABLE TRIGGERS option because otherwise, by default, all database triggers on the table will be disabled.
Recovering Dropped Tables (Undo Drop Table)
In Oracle Ver. 10g Oracle introduced the concept of the Recycle Bin; that is, whenever you drop a table the database does not immediately remove the space used by the table. Instead, the table is renamed and placed in the Recycle Bin. The FLASHBACK TABLE...BEFORE DROP command will restore the table. This feature is not dependent on the UNDO TABLESPACE, so the UNDO_RETENTION parameter has no impact on it. For example, suppose a user accidentally drops the emp table.
SQL> drop table emp;
Table dropped

Now to the user it appears that the table is dropped, but it is actually renamed and placed in the Recycle Bin. To recover this dropped table the user can type the command
SQL> flashback table emp to before drop;
You can also restore the dropped table by giving it a different name, like this
SQL> flashback table emp to before drop rename to emp2;

Purging Objects from the Recycle Bin
If you want to recover the space used by a dropped table give the following command
SQL> purge table emp;
If you want to purge the objects of the logon user, give the following command
SQL> purge recyclebin;
If you want to recover the space of dropped objects in a particular tablespace, give the command
SQL> purge tablespace hr;
You can also purge only objects from a tablespace belonging to a specific user, using the following form of the command:
SQL> PURGE TABLESPACE hr USER scott;
If you have the SYSDBA privilege, then you can purge all objects from the recycle bin, regardless of which user owns the objects, using this command:
SQL> PURGE DBA_RECYCLEBIN;
To view the contents of Recycle Bin give the following command
SQL> SHOW RECYCLEBIN;

Permanently Dropping Tables
If you want to permanently drop a table without putting it into the Recycle Bin, drop the table with the PURGE clause, like this
SQL> drop table emp purge;
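Besides the SHOW RECYCLEBIN command, the recycle bin can be queried like an ordinary view; a minimal sketch using the RECYCLEBIN view of the logged-on user:

```sql
-- ORIGINAL_NAME is the name the table had before it was dropped;
-- OBJECT_NAME is the system-generated BIN$... name used inside the bin.
SELECT object_name, original_name, droptime
FROM   recyclebin;
```

This is handy when several generations of a dropped table exist and you need to see their drop times before flashing one back.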
This will drop the table permanently and it cannot be restored.

Flashback Drop of Multiple Objects With the Same Original Name
You can create, and then drop, several objects with the same original name, and they will all be stored in the recycle bin. For example, consider these SQL statements:

CREATE TABLE EMP (
  ...columns
);          # EMP version 1
DROP TABLE EMP;
CREATE TABLE EMP (
  ...columns
);          # EMP version 2
DROP TABLE EMP;
CREATE TABLE EMP (
  ...columns
);          # EMP version 3
DROP TABLE EMP;
In such a case, each table EMP is assigned a unique name in the recycle bin when it is dropped. You can use a FLASHBACK TABLE ... TO BEFORE DROP statement with the original name of the table, as shown in this example:
FLASHBACK TABLE EMP TO BEFORE DROP;
The most recently dropped table with that original name is retrieved from the recycle bin, with its original name. You can retrieve it and assign it a new name using a RENAME TO clause. The following example shows the retrieval from the recycle bin of all three dropped EMP tables from the previous example, with each assigned a new name:
FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_3;
FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_2;
FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_1;
Important Points:
1. There is no guarantee that objects will remain in the Recycle Bin. Oracle might empty the recycle bin whenever space pressure occurs; that is, whenever a tablespace becomes full and a transaction requires new extents, Oracle will delete objects from the recycle bin.
2. A table and all of its dependent objects (indexes, LOB segments, nested tables, triggers, constraints, and so on) go into the recycle bin together when you drop the table. Likewise, when you perform Flashback Drop, the objects are generally all retrieved together.
3. There is no fixed amount of space allocated to the recycle bin, and no guarantee as to how long dropped objects remain in the recycle bin. Depending upon system activity, a dropped object may remain in the recycle bin for seconds, or for months.
Flashback Database: Alternative to Point-In-Time Recovery
Oracle Flashback Database lets you quickly recover the entire database from logical data corruptions or user errors. To enable Flashback Database, you set up a flash recovery area and set a flashback retention target to specify how far back into the past you want to be able to restore your database with Flashback Database. From that time on, at regular intervals, the database copies images of each altered block in every datafile into flashback logs stored in the flash recovery area. These flashback logs are used to flash the database back to a point in time.
Enabling Flashback Database
Step 1. Shut down the database if it is already running and set the following parameters
DB_RECOVERY_FILE_DEST=/d01/ica/flasharea
DB_RECOVERY_FILE_DEST_SIZE=10G
DB_FLASHBACK_RETENTION_TARGET=4320
(Note: DB_FLASHBACK_RETENTION_TARGET is specified in minutes; here we have specified 3 days, i.e. 3x24x60=4320.)
Step 2. Start the instance and mount the database.
SQL> startup mount;
Step 3. Now enable Flashback Database by giving the following command
SQL> alter database flashback on;
Now Oracle starts writing flashback logs to the recovery area.
To what size should the flash recovery area be set?
After you have enabled the Flashback Database feature and allowed the database to generate some flashback logs, run the following query:
SQL> SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;
This will show the size to which the recovery area should be set.

How Far Back You Can Flash Back the Database
To determine the earliest SCN and earliest time to which you can flash back your database, give the following query:
SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;
Example: Flashing Back Database to a point in time
Suppose a user erroneously drops a schema at 10:00 AM. You as the DBA come to know of this at 5 PM. Since you have configured the flash recovery area and set the flashback retention time to 3 days, you can flash back the database to 9:59 AM by following the given procedure
1. Start RMAN
$ rman target /
2. Run the FLASHBACK DATABASE command to return the database to 9:59 AM by typing the following command
RMAN> FLASHBACK DATABASE TO TIME timestamp('2007-06-21 09:59:00');
or, you can also type this command:
RMAN> FLASHBACK DATABASE TO TIME (SYSDATE-8/24);
3. When the Flashback Database operation completes, you can evaluate the results by opening the database read-only and running some queries to check whether Flashback Database has returned the database to the desired state.
RMAN> SQL 'ALTER DATABASE OPEN READ ONLY';
At this time, you have several options
Option 1:-
If you are content with your result, you can open the database by performing ALTER DATABASE OPEN RESETLOGS:
SQL> ALTER DATABASE OPEN RESETLOGS;
Option 2:If you discover that you have chosen the wrong target time for your Flashback Database operation, DATABASE UNTI UNTI L to bring the database forward, or perform you can use RECOVER DATABASE FLASHBACK DATABASE again with an SCN further in the past. You can completely undo the effects of your flashback operation by performing complete recovery of the database:
RMAN> RECOVER DATABASE;
Option 3:If you only want to retrieve some lost data from the past time, you can open the database read-only, then perform a logical export of the data using an Oracle export utility, then run RECOVER DATABASE to return the database to the present time and re-import the data using the Oracle import utility 4.
Since in our example only a schema is dropped and the rest of database is good, third option is relevant for us. Now, come out of RMAN and run EXPORT utility to export the whole schema
$exp $exp 5.
user i d=s yst em/ manag anager er f i l e=sc ot t . dmp
owner ner =SCO SCOTT
Now Start RMAN and recover database to the present time
$r man t ar get / RMAN> RECOVER DATABASE; 6.
After database is recovered shutdown and restart the database in normal mode and import the schema by running I MPORT PORT utility
$i mp user i d=syst sys t em/ manag anager er f i l e=scot sc ot t . dmp
Log Miner
Using the LogMiner utility, you can query the contents of online redo log files and archived log files. Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis.
LogMiner Configuration
There are three basic objects in a LogMiner configuration that you should be familiar with: the source database, the LogMiner dictionary, and the redo log files containing the data of interest:
• The source database is the database that produces all the redo log files that you want LogMiner to analyze.
• The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request.
LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data. For example, consider the following SQL statement:
INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY) VALUES('IT_WT','Technical Writer', 4000, 11000);
Without the dictionary, LogMiner will display:
insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values
(HEXTORAW('45465f4748'), HEXTORAW('546563686e6963616c20577269746572'),
HEXTORAW('c229'), HEXTORAW('c3020b'));
• The redo log files contain the changes made to the database or database dictionary.
LogMiner Dictionary Options
LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives you three options for supplying the dictionary:
• Using the Online Catalog
Oracle recommends that you use this option when you will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.
• Extracting a LogMiner Dictionary to the Redo Log Files
Oracle recommends that you use this option when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.
• Extracting the LogMiner Dictionary to a Flat File
This option is maintained for backward compatibility with previous releases. This option does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or extract the dictionary from redo log files instead.
Using the Online Catalog

To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner, as follows:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
Extracting a LogMiner Dictionary to the Redo Log Files

To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed. Therefore, the dictionary extracted to the redo log files is guaranteed to be consistent (whereas the dictionary extracted to a flat file is not).

To extract dictionary information to the redo log files, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.

SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
Extracting the LogMiner Dictionary to a Flat File

When the LogMiner dictionary is in a flat file, fewer system resources are used than when it is contained in the redo log files. Oracle recommends that you regularly back up the dictionary extract to ensure correct analysis of older redo log files.

1. Set the initialization parameter UTL_FILE_DIR in the initialization parameter file. For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, enter the following in the initialization parameter file:

UTL_FILE_DIR = /oracle/database
2. Start the database:

SQL> STARTUP
3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file. For example, enter the following to create the file dictionary.ora in /oracle/database:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/database/', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
Redo Log File Options

To mine data in the redo log files, LogMiner needs information about which redo log files to mine. You can direct LogMiner to automatically and dynamically create a list of redo log files to analyze, or you can explicitly specify a list of redo log files for LogMiner to analyze, as follows:
Automatically

If LogMiner is being used on the source database, then you can direct LogMiner to find and create a list of redo log files for analysis automatically. Use the CONTINUOUS_MINE option when you start LogMiner.
Manually

Use the DBMS_LOGMNR.ADD_LOGFILE procedure to manually create a list of redo log files before you start LogMiner. After the first redo log file has been added to the list, each subsequently added redo log file must be from the same database and associated with the same database RESETLOGS SCN. When using this method, LogMiner need not be connected to the source database.
Example: Finding All Modifications in the Current Redo Log File

The easiest way to examine the modification history of a database is to mine at the source database and use the online catalog to translate the redo log files. This example shows how to do the simplest analysis using LogMiner.
Step 1: Specify the list of redo log files to be analyzed.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/ica/log1.ora', OPTIONS => DBMS_LOGMNR.NEW);

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/oracle/ica/log2.ora', OPTIONS => DBMS_LOGMNR.ADDFILE);
Step 2: Start LogMiner and specify the dictionary to use.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
Step 3: Query the V$LOGMNR_CONTENTS view.
Note that there are four transactions (two of them were committed within the redo log file being analyzed, and two were not). The output shows the DML statements in the order in which they were executed; thus transactions interleave among themselves.

SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');

USR  XID
---- ---------
HR   1.11.1476
SQL_REDO: set transaction read write;

HR   1.11.1476
SQL_REDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('306','Mohammed','Sami','MDSAMI','1234567890',TO_DATE('10-jan-2003 13:34:43','dd-mon-yyyy hh24:mi:ss'),'HR_REP','120000','.05','105','10');
SQL_UNDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '306' and "FIRST_NAME" = 'Mohammed' and "LAST_NAME" = 'Sami' and "EMAIL" = 'MDSAMI' and "PHONE_NUMBER" = '1234567890' and "HIRE_DATE" = TO_DATE('10-JAN-2003 13:34:43','dd-mon-yyyy hh24:mi:ss') and "JOB_ID" = 'HR_REP' and "SALARY" = '120000' and "COMMISSION_PCT" = '.05' and "DEPARTMENT_ID" = '10' and ROWID = 'AAAHSkAABAAAY6rAAO';

OE   1.1.1484
SQL_REDO: set transaction read write;

OE   1.1.1484
SQL_REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') where "PRODUCT_ID" = '1799' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and ROWID = 'AAAHTKAABAAAY9mAAB';
SQL_UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') where "PRODUCT_ID" = '1799' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and ROWID = 'AAAHTKAABAAAY9mAAB';

OE   1.1.1484
SQL_REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') where "PRODUCT_ID" = '1801' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and ROWID = 'AAAHTKAABAAAY9mAAC';
SQL_UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') where "PRODUCT_ID" = '1801' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and ROWID = 'AAAHTKAABAAAY9mAAC';

HR   1.11.1476
SQL_REDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('307','John','Silver','JSILVER','5551112222',TO_DATE('10-jan-2003 13:41:03','dd-mon-yyyy hh24:mi:ss'),'SH_CLERK','110000','.05','105','50');
SQL_UNDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '307' and "FIRST_NAME" = 'John' and "LAST_NAME" = 'Silver' and "EMAIL" = 'JSILVER' and "PHONE_NUMBER" = '5551112222' and "HIRE_DATE" = TO_DATE('10-jan-2003 13:41:03','dd-mon-yyyy hh24:mi:ss') and "JOB_ID" = 'SH_CLERK' and "SALARY" = '110000' and "COMMISSION_PCT" = '.05' and "MANAGER_ID" = '105' and "DEPARTMENT_ID" = '50' and ROWID = 'AAAHSkAABAAAY6rAAP';

OE   1.1.1484
SQL_REDO: commit;

HR   1.15.1481
SQL_REDO: set transaction read write;

HR   1.15.1481
SQL_REDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '205' and "FIRST_NAME" = 'Shelley' and "LAST_NAME" = 'Higgins' and "EMAIL" = 'SHIGGINS' and "PHONE_NUMBER" = '515.123.8080' and "HIRE_DATE" = TO_DATE('07-jun-1994 10:05:01','dd-mon-yyyy hh24:mi:ss') and "JOB_ID" = 'AC_MGR' and "SALARY" = '12000' and "COMMISSION_PCT" IS NULL and "MANAGER_ID" = '101' and "DEPARTMENT_ID" = '110' and ROWID = 'AAAHSkAABAAAY6rAAM';
SQL_UNDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('205','Shelley','Higgins','SHIGGINS','515.123.8080',TO_DATE('07-jun-1994 10:05:01','dd-mon-yyyy hh24:mi:ss'),'AC_MGR','12000',NULL,'101','110');

OE   1.8.1484
SQL_REDO: set transaction read write;

OE   1.8.1484
SQL_REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+12-06') where "PRODUCT_ID" = '2350' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+20-00') and ROWID = 'AAAHTKAABAAAY9tAAD';
SQL_UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+20-00') where "PRODUCT_ID" = '2350' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+12-06') and ROWID = 'AAAHTKAABAAAY9tAAD';

HR   1.11.1476
SQL_REDO: commit;
Step 4: End the LogMiner session.

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();
Example of Mining Without Specifying the List of Redo Log Files Explicitly

The previous example explicitly specified the redo log file or files to be mined. However, if you are mining in the same database that generated the redo log files, then you can mine the appropriate list of redo log files by just specifying the time (or SCN) range of interest. To mine a set of redo log files without explicitly specifying them, use the DBMS_LOGMNR.CONTINUOUS_MINE option to the DBMS_LOGMNR.START_LOGMNR procedure, and specify either a time range or an SCN range of interest.
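The time-range form is shown in the next example; the SCN-range form looks like the sketch below. This is a minimal sketch only, and the STARTSCN/ENDSCN values are placeholders, not values taken from this document:

```sql
-- Hedged sketch: substitute your own SCN range for the placeholder values.
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  STARTSCN => 621047, -
  ENDSCN   => 625057, -
  OPTIONS  => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
              DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
              DBMS_LOGMNR.CONTINUOUS_MINE);
```

The trailing hyphens are the SQL*Plus line-continuation character; omit them if you run the command as a single line.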
Example: Mining Redo Log Files in a Given Time Range

This example assumes that you want to use the data dictionary extracted to the redo log files.
Step 1: Determine the timestamp of the redo log file that contains the start of the data dictionary.

SQL> SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG
     WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG
                        WHERE DICTIONARY_BEGIN = 'YES');

NAME                                           FIRST_TIME
---------------------------------------------  --------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf   10-jan-2003 12:01:34
Step 2: Display all the redo log files that have been generated so far. This step is not required, but is included to demonstrate that the CONTINUOUS_MINE option works as expected, as will be shown in Step 4.

SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS
     WHERE LOW_TIME > '10-jan-2003 12:01:34';

NAME
----------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf
Step 3: Start LogMiner by specifying the dictionary to use and the COMMITTED_DATA_ONLY, PRINT_PRETTY_SQL, and CONTINUOUS_MINE options.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
     STARTTIME => '10-jan-2003 12:01:34', -
     ENDTIME => SYSDATE, -
     OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
                DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
                DBMS_LOGMNR.PRINT_PRETTY_SQL + -
                DBMS_LOGMNR.CONTINUOUS_MINE);
Step 4: Query the V$LOGMNR_LOGS view. This step shows that the DBMS_LOGMNR.START_LOGMNR procedure with the CONTINUOUS_MINE option includes all of the redo log files that have been generated so far, as expected. (Compare the output in this step to the output in Step 2.)

SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS;

NAME
----------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf
Step 5: Query the V$LOGMNR_CONTENTS view. To reduce the number of rows returned by the query, exclude all DML statements done in the SYS or SYSTEM schema. (This query specifies a timestamp to exclude transactions that were involved in the dictionary extraction.) Note that all reconstructed SQL statements returned by the query are correctly translated.

SQL> SELECT USERNAME AS usr, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO
     FROM V$LOGMNR_CONTENTS
     WHERE (SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM'))
     AND TIMESTAMP > '10-jan-2003 15:59:53';
USR  XID
---- ---------
SYS  1.2.1594
SQL_REDO: set transaction read write;

SYS  1.2.1594
SQL_REDO: create table oe.product_tracking (product_id number not null, modified_time date, old_list_price number(8,2), old_warranty_period interval year(2) to month);

SYS  1.2.1594
SQL_REDO: commit;

SYS  1.18.1602
SQL_REDO: set transaction read write;

SYS  1.18.1602
SQL_REDO: create or replace trigger oe.product_tracking_trigger
          before update on oe.product_information for each row
          when (new.list_price <> old.list_price or new.warranty_period <> old.warranty_period)
          declare
          begin
            insert into oe.product_tracking values (:old.product_id, sysdate, :old.list_price, :old.warranty_period);
          end;

SYS  1.18.1602
SQL_REDO: commit;

OE   1.9.1598
SQL_REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'), "LIST_PRICE" = 100 where "PRODUCT_ID" = 1729 and "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and "LIST_PRICE" = 80 and ROWID = 'AAAHTKAABAAAY9yAAA';

OE   1.9.1598
SQL_REDO: insert into "OE"."PRODUCT_TRACKING" values "PRODUCT_ID" = 1729, "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:03','dd-mon-yyyy hh24:mi:ss'), "OLD_LIST_PRICE" = 80, "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

OE   1.9.1598
SQL_REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'), "LIST_PRICE" = 92 where "PRODUCT_ID" = 2340 and "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and "LIST_PRICE" = 72 and ROWID = 'AAAHTKAABAAAY9zAAA';

OE   1.9.1598
SQL_REDO: insert into "OE"."PRODUCT_TRACKING" values "PRODUCT_ID" = 2340, "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:07','dd-mon-yyyy hh24:mi:ss'), "OLD_LIST_PRICE" = 72, "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

OE   1.9.1598
SQL_REDO: commit;
Step 6: End the LogMiner session.

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();
BACKUP AND RECOVERY

Opening or Bringing the Database in Archivelog Mode
To open the database in archivelog mode, follow these steps:

STEP 1: Shut down the database if it is running.
STEP 2: Take a full offline backup.
STEP 3: Set the following parameters in the parameter file.
LOG_ARCHIVE_FORMAT=ica%s.%t.%r.arc
LOG_ARCHIVE_DEST_1='location=/u02/ica/arc1'

If you want, you can specify a second destination also:

LOG_ARCHIVE_DEST_2='location=/u02/ica/arc2'

STEP 4: Start and mount the database.

SQL> STARTUP MOUNT

STEP 5: Give the following command:

SQL> ALTER DATABASE ARCHIVELOG;

STEP 6: Then type the following to confirm:

SQL> ARCHIVE LOG LIST;

STEP 7: Now open the database.

SQL> ALTER DATABASE OPEN;

STEP 8: It is recommended that you take a full backup after you have brought the database into archivelog mode.

To bring the database back into NOARCHIVELOG mode, follow these steps:
STEP 1: Shut down the database if it is running.
STEP 2: Comment out the following parameters in the parameter file by putting "#" in front of them.

# LOG_ARCHIVE_DEST_1='location=/u02/ica/arc1'
# LOG_ARCHIVE_DEST_2='location=/u02/ica/arc2'
# LOG_ARCHIVE_FORMAT=ica%s.%t.%r.arc

STEP 3: Start and mount the database.
SQL> STARTUP MOUNT;
STEP 4: Give the following command:

SQL> ALTER DATABASE NOARCHIVELOG;

STEP 5: Shut down the database and take a full offline backup.
TAKING OFFLINE BACKUPS (UNIX)

Shut down the database if it is running. Then start SQL*Plus and connect as SYSDBA.
$ sqlplus
SQL> CONNECT / AS SYSDBA
SQL> SHUTDOWN IMMEDIATE
SQL> EXIT

After shutting down the database, copy all the datafiles, logfiles, controlfiles, parameter file and password file to your backup destination.

TIP:
To identify the datafiles and logfiles, query the data dictionary views V$DATAFILE and V$LOGFILE before shutting down. Let's suppose all the files are in the "/u01/ica" directory. Then the following command copies all the files to the backup destination /u02/backup.
$ cd /u01/ica
$ cp * /u02/backup/

Be sure to remember the destination of each file. This will be useful when restoring from this backup. You can create a text file and record the destination of each file in it for future use. Now you can open the database.
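Instead of noting file locations by hand, you can generate the complete copy list from the dynamic performance views before shutting down. A sketch; the UNION covers datafiles, online redo logfiles and controlfiles:

```sql
-- Every file that must be copied for a full offline backup.
SELECT NAME   FROM V$DATAFILE
UNION ALL
SELECT MEMBER FROM V$LOGFILE
UNION ALL
SELECT NAME   FROM V$CONTROLFILE;
```

Spooling this output to a file gives you the restore map mentioned above.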
TAKING ONLINE (HOT) BACKUPS (UNIX)

To take online backups the database should be running in archivelog mode. To check whether the database is running in archivelog or noarchivelog mode, start SQL*Plus and connect as SYSDBA. After connecting, give the command ARCHIVE LOG LIST; this will show you the status of archiving.
$ sqlplus
Enter User: / as sysdba
SQL> ARCHIVE LOG LIST

If the database is running in archivelog mode then you can take online backups. Let us suppose we want to take an online backup of the "USERS" tablespace. You can query the V$DATAFILE view to find out the names of the datafiles associated with this tablespace. Let's suppose the file is
"/u01/ica/usr1.dbf". Give the following series of commands to take an online backup of the USERS tablespace.

$ sqlplus
Enter User: / as sysdba
SQL> ALTER TABLESPACE users BEGIN BACKUP;
SQL> HOST cp /u01/ica/usr1.dbf /u02/backup
SQL> ALTER TABLESPACE users END BACKUP;
SQL> EXIT;
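If a hot backup session is interrupted, a datafile can be left in backup mode, so it is worth checking V$BACKUP before and after. A sketch joining it to V$DATAFILE for readable file names:

```sql
-- STATUS = 'ACTIVE' means the datafile is still in backup mode.
SELECT d.NAME, b.STATUS
FROM   V$DATAFILE d, V$BACKUP b
WHERE  d.FILE# = b.FILE#;
```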
RECOVERING THE DATABASE IF IT IS RUNNING IN NOARCHIVELOG MODE

Option 1: When you don't have a backup.
If you have lost one datafile, you don't have any backup, and the datafile does not contain important objects, then you can drop the damaged datafile and open the database. You will lose all information contained in the damaged datafile. The following are the steps to drop a damaged datafile and open the database (UNIX).

STEP 1: First take a full backup of the database for safety.
STEP 2: Start SQL*Plus and give the following commands.
$ sqlplus
Enter User: / as sysdba
SQL> STARTUP MOUNT
SQL> ALTER DATABASE DATAFILE '/u01/ica/usr1.dbf' OFFLINE DROP;
SQL> ALTER DATABASE OPEN;

Option 2: When you have the backup.
If the database is running in noarchivelog mode and you have a full backup, then there are two options:
i. Either drop the damaged datafile, if it does not contain important information that you can afford to lose.
ii. Or restore from the full backup. You will lose all the changes made to the database since the last full backup.

To drop the damaged datafile, follow the steps shown previously. To restore from a full database backup, do the following.
STEP 1: Take a full backup of the current database.
STEP 2: Restore from the full database backup, i.e. copy all the files from the backup to their original locations (UNIX). Suppose the backup is in the "/u02/backup" directory. Then do the following.
$ cp /u02/backup/* /u01/ica
This will copy all the files from backup directory to original destination. Also remember to copy the control files to all the mirrored locations.
RECOVERING FROM LOSS OF CONTROL FILE

If you have lost the control file and it is mirrored, then simply copy the control file from a mirrored location to the damaged location and open the database.

If you have lost all the mirrored control files and all the datafiles and logfiles are intact, then you can re-create the control file. If you have already taken a backup of the control file creation statement by giving the command "ALTER DATABASE BACKUP CONTROLFILE TO TRACE;", and you have not added any tablespace since then, just create the control file by executing that statement. But if you have added any new tablespace after generating the create controlfile statement, then you have to alter the script and include the filename and size of the new file in the script.

If your script file containing the control file creation statement is "CR.SQL", then just do the following.

STEP 1: Start SQL*Plus.
STEP 2: CONNECT / AS SYSDBA
STEP 3: Start and do not mount the database, like this:
SQL> STARTUP NOMOUNT

STEP 4: Run the "CR.SQL" script file.
STEP 5: Mount and open the database.

SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN;

If you do not have a backup of the control file creation statement, then you have to manually give the CREATE CONTROLFILE statement. You have to write the file names and sizes of all the datafiles; you will lose any datafiles which you do not include. Refer to the "Managing Control File" topic for the CREATE CONTROLFILE statement.
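For reference, a manually written control file creation statement looks roughly like the sketch below. The database name, file names and sizes here are hypothetical placeholders; your statement must list every real datafile and online logfile:

```sql
-- Hypothetical example only: substitute your own database name and files.
CREATE CONTROLFILE REUSE DATABASE "ICA" NORESETLOGS ARCHIVELOG
  MAXLOGFILES  16
  MAXDATAFILES 100
  LOGFILE
    GROUP 1 '/u01/ica/log1.ora' SIZE 10M,
    GROUP 2 '/u01/ica/log2.ora' SIZE 10M
  DATAFILE
    '/u01/ica/sys1.dbf',
    '/u01/ica/usr1.dbf'
  CHARACTER SET WE8ISO8859P1;
```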
Recovering the Database When It Is Running in ARCHIVELOG Mode

Recovering from the Loss of a Damaged Datafile
If you have lost one datafile, then follow the steps shown below.

STEP 1: Shut down the database if it is running.
STEP 2: Restore the datafile from the most recent backup.
STEP 3: Then start SQL*Plus and connect as SYSDBA.
$ sqlplus
Enter User: / as sysdba
SQL> STARTUP MOUNT
SQL> SET AUTORECOVERY ON
SQL> ALTER DATABASE RECOVER;

If all archive log files are available then recovery should go on smoothly. After you get the "Media recovery complete" message, go on to the next step.

STEP 4: Now open the database.

SQL> ALTER DATABASE OPEN;

Recovering from Lost Archived Files:

If you have lost the archived files, then immediately shut down the database and take a full offline backup.
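When the database is in ARCHIVELOG mode, a single damaged datafile can also be recovered while the rest of the database stays open, by taking just that file offline. A sketch, assuming the same /u01/ica/usr1.dbf file used in the earlier examples:

```sql
-- Take only the damaged file offline; the rest of the database stays open.
ALTER DATABASE DATAFILE '/u01/ica/usr1.dbf' OFFLINE;
-- Restore the file from backup at OS level here,
-- e.g. host cp /u02/backup/usr1.dbf /u01/ica/
RECOVER DATAFILE '/u01/ica/usr1.dbf';
ALTER DATABASE DATAFILE '/u01/ica/usr1.dbf' ONLINE;
```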
Time Based Recovery (INCOMPLETE RECOVERY)

Suppose a user has dropped a crucial table accidentally and you have to recover it. You took a full backup of the database on Monday 13-Aug-2007; the table was created on Tuesday 14-Aug-2007 and thousands of rows were inserted into it. Some user accidentally dropped the table on Thursday 16-Aug-2007 and nobody noticed until Saturday. Now to recover the table, follow these steps.

STEP 1: Shut down the database and take a full offline backup.
STEP 2: Restore all the datafiles, logfiles and the control file from the full offline backup which was taken on Monday.
STEP 3: Start SQL*Plus and start and mount the database.
STEP 4: Then give the following command to recover the database until the specified time.
SQL> RECOVER DATABASE UNTIL TIME '2007:08:16:13:55:00' USING BACKUP CONTROLFILE;
80 STEP 5. Open the database and reset the logs. Because you have performed a Incomplete Recovery, like this
SQL> ALTER DATABASE OPEN RESETLOGS;

STEP 6: After the database is open, export the table to a dump file using the Export utility.
STEP 7: Restore from the full database backup which you took on Saturday.
STEP 8: Open the database and import the table.