Importing an Oracle Database Export File
My intent is to import into an existing database, as opposed to setting up a new database instance just to accommodate the full export I have been given. Data Pump Import can also perform a network import, loading a target database directly from a source database with no intervening dump files.
The Oracle Data Pump Import utility is used to load an export dump file set into a target database. Because of the overhead involved in validating data, data is no longer validated on import by default. Use the validation option when importing a dump file from an untrusted source, to prevent issues that can occur because data in the dump file is corrupt.
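As a minimal sketch (file names assumed; DATA_PUMP_DIR is the default directory object), a full import from a dump file, and alternatively a network import over a database link, might look like:

  impdp system FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=expfull.dmp LOGFILE=impfull.log
  # network import: no dump file is needed; source_db is an assumed database link
  impdp system FULL=Y NETWORK_LINK=source_db LOGFILE=impnet.log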
In this Oracle Database 12c new features article series, I will be exploring some of the most important new additions and enhancements in the areas of database administration, RMAN, High Availability, and performance tuning.
Oracle Data Pump is a newer, faster, and more flexible alternative to the exp and imp utilities used in previous Oracle versions. In the original Import utility, the STATISTICS parameter specifies what is done with database optimizer statistics at import time: SAFE imports the statistics only if they are not questionable and recalculates them otherwise, while RECALCULATE does not import the statistics at all and instead recalculates them on import. See Oracle Database Concepts for more information about the optimizer and the statistics it uses. The STREAMS_CONFIGURATION parameter specifies whether to import any general Streams metadata that may be present in the export dump file, and STREAMS_INSTANTIATION specifies whether to import Streams instantiation metadata; specify y for the latter if the import is part of an instantiation in a Streams environment.
The TABLES parameter specifies that the import is a table-mode import, and lists the table names and the partition and subpartition names to import. Table-mode import lets you import entire partitioned or nonpartitioned tables. If a table in the list is partitioned and you do not specify a partition name, all its partitions and subpartitions are imported. All tables whose names match one of the specified patterns are selected for import. A table name in the list that consists entirely of pattern-matching characters and has no partition name results in all exported tables being imported.
As the export file is processed, each table name in the export file is compared against each table name in the list, in the order in which the table names were specified in the parameter.
To avoid ambiguity and excessive processing time, specific table names should appear at the beginning of the list, and more general table names (those with patterns) should appear at the end. Although you can qualify table names with schema names (as in scott.emp) when exporting, you cannot do so when importing; use the FROMUSER and TOUSER parameters instead. Case-sensitivity can be preserved in the different Import modes by enclosing table names in quotation marks. Similarly, in the parameter file, if a table name includes a pound sign, the Import utility interprets the rest of the line as a comment, unless the table name is enclosed in quotation marks.
For example, if the parameter file contains the first line shown in the sketch below, Import interprets everything on the line after emp as a comment and does not import the tables dept and mydata. Given the second line, however, the Import utility imports all three tables, because emp# is enclosed in quotation marks. On the command line, you must also use escape characters to get such characters in the name past the shell and into Import.
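A hedged reconstruction of the two parameter-file lines just described (the names emp#, dept, and mydata come from the surrounding text):

  TABLES=(emp#, dept, mydata)
  TABLES=("emp#", dept, mydata)

In the first line, everything from the # onward is treated as a comment, so only emp is recognized; in the second, the quotation marks preserve the # and all three tables are imported.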
For transportable tablespace imports, if there is more than one tablespace in the export file, you must specify all of them as part of the import operation. When you import a table that references a type, but a type of that name already exists in the database, Import attempts to verify that the preexisting type is, in fact, the type used by the table rather than a different type that just happens to have the same name.
To do this, Import compares the type's unique identifier (TOID) with the identifier stored in the export file. Import will not import the table rows if the TOIDs do not match.
In some situations, you may not want this validation to occur on specified types (for example, if the types were created by a cartridge installation); the TOID_NOVALIDATE parameter lets you suppress validation for the types you list. If you do not specify a schema name for a type, it defaults to the schema of the importing user.
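A hedged sketch of suppressing this validation with the original Import utility (the table name jobs is assumed; typ1 follows the text's example):

  imp scott TABLES=jobs TOID_NOVALIDATE=typ1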
In the preceding sketch, for example, the type typ1 defaults to scott.typ1. TOID_NOVALIDATE has no effect on table types. The output of a typical import with excluded types contains entries similar to ". . skipping TOID validation on type SCOTT.TYP1". The TOUSER parameter specifies a list of user names whose schemas will be targets for Import. The user names must exist prior to the import operation; otherwise an error is returned. If multiple schemas are specified with FROMUSER and TOUSER, the schema names are paired.
When specified as y, the TRANSPORT_TABLESPACE parameter instructs Import to import transportable tablespace metadata from an export file. In each of the following examples, you are shown how to use both the command-line method and the parameter file method; some examples use vertical ellipses to indicate sections of example output that were too long to include. The following example imports scott's objects into joe's schema, and fred's objects into ted's schema:
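A hedged sketch of the command (the dump file name expdat.dmp is assumed):

  imp system FROMUSER=scott,fred TOUSER=joe,ted FILE=expdat.dmp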
In this example, an entire database is exported to the file dba.dmp, as sketched below. Information is displayed about the release of Export you are using and the release of Oracle Database that you are connected to. Status messages are written out as the entire database is exported, and a final completion message is returned when the export completes successfully, without warnings.
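A hedged sketch of the command behind this example (run as a privileged account):

  exp system FULL=y FILE=dba.dmp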
In this example, user scott exports his own tables, as sketched below; similar status messages are then shown as the export proceeds.
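A hedged sketch (file and table names assumed; the quotes keep the parentheses from the shell):

  exp scott FILE=scott.dmp TABLES='(emp,dept)'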
In table mode, you can export table data or the table definitions. If schemaname is not specified, Export defaults to the previous schema name from which an object was exported.
If there is no previous object, Export defaults to the exporter's schema. A nonprivileged user can export only dependent objects for the specified tables that the user owns.
Exports in table mode do not include cluster definitions; as a result, the data is exported as unclustered tables. Thus, you can use table mode to uncluster tables. In the next example, pattern matching is used to export various tables for users scott and blake, as sketched below.
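A hedged sketch using the documented % pattern character (the file name and the specific patterns are assumed):

  exp system FILE=misc.dmp TABLES='(scott.%P%, blake.%)'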
In partition-level Export, you can specify the partitions and subpartitions of a table that you want to export. Assume emp is a table partitioned on employee name, with two partitions, m and z. If you export the table without specifying a partition, all of its partitions are exported; if you export the table and specify a partition, only the specified partition is exported. Both cases are sketched below.
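Hedged sketches of the two cases, using the documented table:partition syntax (file names assumed):

  exp scott TABLES=emp FILE=emp_all.dmp
  exp scott TABLES=emp:m FILE=emp_m.dmp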
Now assume emp is partitioned using the composite method, with two partitions, m and z; partition m has subpartitions sp1 and sp2, and partition z has subpartitions sp3 and sp4. If you export composite partition m, all of its subpartitions sp1 and sp2 are exported; if you export the table and specify subpartition sp4 (for example, TABLES=emp:sp4), only that subpartition is exported. This section gives some examples of import sessions that show you how to use the parameter file and command-line methods.
The examples illustrate the following scenarios. In the first, using a full database export file, an administrator imports the dept and emp tables into the scott schema; information is displayed about the release of Import you are using and the release of Oracle Database that you are connected to. The second illustrates importing the unit and manager tables from a file exported by blake into the scott schema. Both are sketched below.
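Hedged sketches of the two imports (dump file names assumed):

  imp system FILE=dba.dmp FROMUSER=scott TABLES='(dept,emp)'
  imp scott FILE=blake.dmp FROMUSER=blake TOUSER=scott TABLES='(unit,manager)'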
In this example, a database administrator (DBA) imports all tables belonging to scott into user blake's account. This section also describes an import of a table with multiple partitions, an import of a table with partitions and subpartitions, and repartitioning a table on different columns. In the first of these, emp is a partitioned table with three partitions: P1, P2, and P3. In a partition-level Import, you can specify the specific partitions of an exported table that you want to import.
In this example, those are partitions P1 and P3 of table emp (for example, TABLES=(emp:p1,emp:p3)). The next example demonstrates that the partitions and subpartitions of a composite partitioned table are imported; it assumes the emp table has two partitions based on the empno column. A final example repartitions the emp table on the deptno column. The Export and Import utilities are the only method that Oracle supports for moving an existing Oracle database from one hardware platform to another.
First, record the tablespace and datafile layout of the source database; you will need this information later in the process. Export the database and move the dump file to the target database server. Before importing the dump file, you must first create your tablespaces, using the information obtained in the first step; otherwise, the import will create the corresponding datafiles in the same file structure as at the source database, which may not be compatible with the file structure on the target system. The overall flow is sketched below.
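A hedged sketch of the flow (file names assumed; the actual CREATE TABLESPACE statements depend on the layout you recorded in the first step):

  exp system FULL=y FILE=full.dmp
  # transfer full.dmp to the target server in binary mode, then on the target,
  # create the tablespaces first, for example in SQL*Plus:
  #   CREATE TABLESPACE users2 DATAFILE '/u02/oradata/orcl/users2_01.dbf' SIZE 100M;
  imp system FULL=y FILE=full.dmp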
This section describes the different types of messages issued by Export and Import and how to save them in a log file. You can capture all Export and Import messages in a log file, either by using the LOG parameter or, on systems that permit it, by redirecting the output to a file. A log of detailed information is written about successful unloads and loads and about any errors that may have occurred. Export and Import do not terminate after recoverable errors. For example, if an error occurs while exporting a table, Export displays or logs an error message, skips to the next table, and continues processing.
These recoverable errors are known as warnings. For example, if a nonexistent table is specified as part of a table-mode Export, the Export utility exports all other tables, then issues a warning and terminates successfully. Some errors are nonrecoverable and terminate the Export or Import session; these typically occur because of an internal problem or because a resource, such as memory, is not available or has been exhausted.
For example, Export cannot run until the catexp.sql script has been executed. When an export or import completes without errors, a message to that effect is displayed, for example, "Export terminated successfully without warnings." If one or more recoverable errors occur but the job continues to completion, a message similar to "Export terminated successfully with warnings." is displayed.
If a nonrecoverable error occurs, the job terminates immediately, displaying a message such as "Export terminated unsuccessfully." Export and Import provide the results of an operation immediately upon completion; depending on the platform, the outcome may be reported in a process exit code, with the results also recorded in the log file.
This enables you to check the outcome from the command line or from a script. The exit codes returned for the various results are platform-specific; see the Exit Codes for Export and Import table in the Oracle documentation.
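A hedged shell sketch of checking the exit code (the meaning of specific nonzero values is platform-specific):

  exp scott FILE=scott.dmp TABLES=emp
  if [ $? -ne 0 ]; then
    echo "Export did not complete cleanly; check the log file."
  fi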
This section describes factors to take into account when using Export and Import across a network. Because the export file is in binary format, use a protocol that supports binary transfers to prevent corruption of the file when you transfer it across a network; for example, use FTP or a similar file transfer protocol in binary mode. Transmitting export files in character mode causes errors when the file is imported. With Oracle Net, you can perform exports and imports over a network. For example, if you run Export locally, you can write data from a remote Oracle database into a local export file.
If you run Import locally, you can read data into a remote Oracle database. To use Oracle Net, include a connect string with the user ID, in the form username@connect_string; for the exact syntax of the connect string, see the user's guide for your Oracle Net protocol.
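A hedged sketch (the net service name remotedb is assumed):

  exp scott@remotedb FILE=remote.dmp TABLES='(emp,dept)'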
The following sections describe the globalization support behavior of Export and Import with respect to character set conversion of user data and data definition language (DDL). The Export utility always exports user data, including Unicode data, in the character sets of the Export server.
Character sets are specified at database creation. If the character sets of the source database are different from the character sets of the import database, a single conversion is performed to automatically convert the data to the character sets of the Import server.
If the export character set has a different sorting order than the import character set, then tables that are partitioned on character columns may yield unpredictable results.
For example, consider the following table definition, which is produced on a database having an ASCII character set.
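A hedged reconstruction of such a table (the name partlist comes from the original example; column, partition, and tablespace names are assumed):

  CREATE TABLE partlist (
    part   VARCHAR2(40),
    partno NUMBER(10)
  )
  PARTITION BY RANGE (part) (
    PARTITION part_low VALUES LESS THAN ('Z')      TABLESPACE tbs_1,
    PARTITION part_mid VALUES LESS THAN (MAXVALUE) TABLESPACE tbs_2
  );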
Under a character set with a different sort order, rows can map to different partitions; to obtain the desired results, the owner of partlist must repartition the table following the import. If the export file's character set is different from the import user session character set, then Import converts the data to its user session character set; Import can perform this conversion only for single-byte character sets. This means that for multibyte character sets, the import file's character set must be identical to the export file's character set. A final character set conversion may be performed if the target database's character set is different from the character set used by the import user session.
To minimize data loss due to character set conversions, ensure that the export database, the export user session, the import user session, and the import database all use the same character set.
Some 8-bit characters can be lost (that is, converted to 7-bit equivalents) when you import an 8-bit character set export file into a database with a 7-bit character set. Most often, this is apparent when accented characters lose the accent mark. During character set conversion, any characters in the export file that have no equivalent in the target character set are replaced with a default character. The default character is defined by the target character set. The three interrelated objects in a snapshot system are the master table, the optional snapshot log, and the snapshot itself. The tables (master table, snapshot log table definition, and snapshot tables) can be exported independently of one another. Snapshot logs can be exported only if you export the associated master table. You can export snapshots using full database or user-mode Export; you cannot use table-mode Export.
The snapshot log in a dump file is imported only if the master table already exists in the target database and has a snapshot log. Because the ROWIDs stored in a ROWID snapshot log lose their meaning in the target database, each ROWID snapshot's first attempt to do a fast refresh fails, generating an error indicating that a complete refresh is required. After you have done a complete refresh, subsequent fast refreshes will work properly.
In contrast, when a primary key snapshot log is exported, the values of the primary keys do retain their meaning upon import. Therefore, primary key snapshots can do a fast refresh after the import. A snapshot that has been restored from an export file has reverted to a previous state. On import, the time of the last refresh is imported as part of the snapshot table definition. The function that calculates the next refresh time is also imported.
Each refresh leaves a signature. A fast refresh uses the log entries that date from the time of that signature to bring the snapshot up to date. When the fast refresh is complete, the signature is deleted and a new signature is created. Any log entries that are not needed to refresh other snapshots are also deleted (that is, all log entries with times before the earliest remaining signature).
When you restore a snapshot from an export file, you may encounter a problem under certain circumstances.
Assume that a snapshot is refreshed at time A, exported at time B, and refreshed again at time C. Then, because of corruption or other problems, the snapshot needs to be restored by dropping the snapshot and importing it again. The newly imported version has the last refresh time recorded as time A. However, log entries needed for a fast refresh may no longer exist. If the log entries do exist (because they are needed for another snapshot that has yet to be refreshed), they are used, and the fast refresh completes successfully.
Otherwise, the fast refresh fails, generating an error that says a complete refresh is required. Snapshots and related items are exported with the schema name explicitly given in the DDL statements; to import them into a different schema, use the FROMUSER and TOUSER parameters. This does not apply to snapshot logs, which cannot be imported into a different schema. The transportable tablespace feature enables you to move a set of tablespaces from one Oracle database to another.
To move or copy a set of tablespaces, you must make the tablespaces read-only, copy the datafiles of these tablespaces, and use Export and Import to move the database information (metadata) stored in the data dictionary. Both the datafiles and the metadata export file must be copied to the target database. The transport of these files can be done using any facility for copying flat binary files, such as the operating system copying facility, binary-mode FTP, or publishing on CD-ROM.
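A hedged sketch of the flow (tablespace, file, and owner names assumed):

  # on the source, make the tablespace read-only first, for example in SQL*Plus:
  #   ALTER TABLESPACE sales_ts READ ONLY;
  exp system TRANSPORT_TABLESPACE=y TABLESPACES=sales_ts FILE=tts.dmp
  # copy tts.dmp and the sales_ts datafiles to the target in binary mode, then:
  imp system TRANSPORT_TABLESPACE=y FILE=tts.dmp DATAFILES='/u02/oradata/orcl/sales01.dbf' TTS_OWNERS=sales_user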
Export and Import provide the TRANSPORT_TABLESPACE, TABLESPACES, DATAFILES, and TTS_OWNERS parameters, used in the sketch above, to enable movement of transportable tablespace metadata. See Oracle Database Administrator's Guide for details about managing transportable tablespaces.
See Oracle Database Concepts for an introduction to transportable tablespaces. Read-only tablespaces can be exported, but the tablespace is not read-only after the import; if you want read-only functionality, you must manually make the tablespace read-only again after the import.
You can drop a tablespace by redefining the objects to use different tablespaces before the import. In many cases, you can also drop a tablespace by doing a full database export, then creating a zero-block tablespace with the same name as the tablespace you want to drop, before performing the import. All objects from that tablespace will be imported into their owner's default tablespace, with the exception of partitioned tables, type tables, tables that contain LOB or VARRAY columns, and index-only tables with overflow segments.
If such an object fails to import, Import cannot determine which tablespace caused the error. Objects are not imported into the default tablespace if that tablespace does not exist or you do not have the necessary quota on it. If a user's quota allows it, the user's tables are imported into the same tablespace from which they were exported. However, if the tablespace no longer exists or the user does not have the necessary quota, the system uses the default tablespace for that user, as long as the table is unpartitioned, contains no LOB or VARRAY columns, is not a type table, and is not an index-only table with an overflow segment.
This scenario can be used to move a user's tables from one tablespace to another. For example, suppose you need to move joe's tables from tablespace A to tablespace B after a full database export. Follow these steps. First, set joe's quota on tablespace A to zero, and also revoke any roles that might carry such privileges or quotas; revoking a role does not cascade, so users who were granted other roles by joe are unaffected.
Next, give joe a quota on tablespace B and make it the default tablespace for joe. Finally, import joe's tables; by default, Import puts them into tablespace B, as sketched below.
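A hedged SQL sketch of these steps (tablespace names and the dump file name are assumed):

  -- set joe's quota on the old tablespace to zero
  ALTER USER joe QUOTA 0 ON tbs_a;
  -- give joe a quota on the new tablespace and make it his default
  ALTER USER joe DEFAULT TABLESPACE tbs_b QUOTA UNLIMITED ON tbs_b;
  -- then, from the operating system, import joe's tables:
  --   imp system FILE=full.dmp FROMUSER=joe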
You can export and import tables with fine-grained access control policies enabled. When doing so, consider the following. If a user without the correct privileges attempts to export a table with fine-grained access policies enabled, only those rows that the user has privileges to read will be exported. If a user without the correct privileges attempts to import from an export file that contains tables with fine-grained access control policies, a warning message will be issued. For security reasons, it is therefore advisable that the exporter and importer of such tables be the DBA.
If fine-grained access control is enabled on a SELECT statement, then conventional path Export may not export the entire table, because fine-grained access may rewrite the query.
You can use instance affinity to associate jobs with instances in databases you plan to export and import, but be aware that there may be some compatibility issues if you are using a combination of releases; see Oracle Database Reference and Oracle Database Upgrade Guide. A database with many noncontiguous, small blocks of free space is said to be fragmented.
A fragmented database should be reorganized to make space available in contiguous, larger blocks. You can reduce fragmentation by performing a full database export, deleting and re-creating the database, and then performing a full database import; see your Oracle operating system-specific documentation for information about how to delete a database. If the tablespace no longer exists, or the user does not have sufficient quota in the tablespace, the system uses the default tablespace for that user, unless the table is partitioned, is a type table, contains LOB or VARRAY columns, or is an index-only table with an overflow segment.
If the user does not have sufficient quota in the default tablespace, the user's tables are not imported. See Reorganizing Tablespaces for how you can use this behavior to your advantage. Tables are exported with their current storage parameters; if you alter the storage parameters of existing tables prior to export, the tables are exported using those altered storage parameters.
Note that LOB data might not reside in the same tablespace as the containing table. If LOB data resides in a tablespace that does not exist at the time of import, or the user does not have the necessary quota in that tablespace, the table will not be imported. Because there can be multiple tablespace clauses, including one for the table, Import cannot determine which tablespace clause caused the error.
Before using the Import utility to import data, you may want to create large tables with different storage parameters; if so, specify IGNORE=y so that Import does not fail when it finds that the tables already exist. By default at export time, storage parameters are adjusted to consolidate all data into its initial extent. The material presented in this section is specific to the original Export utility.
In a conventional path Export, data is read from disk into the buffer cache, and rows are transferred to the evaluating buffer; after passing expression evaluation, the data is transferred to the Export client, which then writes the data into the export file. Direct path Export is much faster than conventional path Export because data is read from disk into the buffer cache and rows are transferred directly to the Export client.
The evaluating buffer (that is, the SQL command-processing layer) is bypassed, and the data is already in the format that Export expects, thus avoiding unnecessary data conversion. The data is transferred to the Export client, which then writes the data into the export file.
One error you may encounter during an export is ORA-01555: snapshot too old (rollback segment number string with name "string" too small). Users who have been granted the EXEMPT ACCESS POLICY privilege are exempt from Virtual Private Database and Oracle Label Security enforcement, regardless of the export mode, application, or utility used to extract data from the database; this is a powerful privilege and should be carefully managed. Your exact performance gain from direct path Export depends on several factors, and note that an export file created using direct path Export takes the same amount of time to import as an export file created using conventional path Export.
To invoke a direct path Export, you must use either the command-line method or a parameter file; you cannot invoke a direct path Export using the interactive method.
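A hedged sketch (file name assumed):

  exp scott FILE=scott.dmp DIRECT=y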
To extract metadata from a source database, Export uses queries that contain ordering clauses (sort operations). For these queries to succeed, the user performing the export must be able to allocate sort segments. For these sort segments to be allocated in a read-only database, the user's temporary tablespace should be set to point to a temporary, locally managed tablespace. The following sections describe points you should consider when you export particular database objects. If transactions continue to access sequence numbers during an export, sequence numbers might be skipped.
The best way to ensure that sequence numbers are not skipped is to ensure that the sequences are not accessed during the export. Sequence numbers can be skipped only when cached sequence numbers are in use.
When a cache of sequence numbers has been allocated, they are available for use in the current database. The exported value is the next sequence number after the cached values; sequence numbers that are cached but unused are lost when the sequence is imported. On export, LONG datatypes are fetched in sections; however, enough memory must be available to hold the entire contents of each row, including the LONG data. LONG columns can be up to 2 gigabytes in length. In contrast, not all of the data in a LOB column needs to be held in memory at the same time.
LOB data is loaded and unloaded in sections. The contents of foreign function libraries are not included in the export file; only the library specification (name and location) is included, in full database mode and user-mode export. The purpose of this post is to briefly describe how to list the objects in an export file without physically importing the data. Here are two simple ways to perform such a dummy import while minimizing the occurrence of import errors.
One way uses the SHOW parameter, as sketched below; the other creates the objects separately from a script file.
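A hedged sketch of the SHOW approach (dump file name assumed); with SHOW=y, the contents of the export file are listed and nothing is actually imported:

  imp system FILE=expdat.dmp FULL=y SHOW=y LOG=contents.log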
For SQL*Loader, a particular datafile can be in fixed record format, variable record format, or stream record format (the default). The log file contains a detailed summary of the load, including a description of any errors that occurred during the load. The discard file contains records that were filtered out of the load because they did not match any record-selection criteria specified in the control file. Conventional Path. A conventional path load is the default loading method. This method can sometimes be slower than other methods because extra overhead is added as SQL statements are generated, passed to Oracle, and executed.
Direct Path. A direct path load does not compete with other users for database resources. It eliminates much of the Oracle database overhead by formatting Oracle data blocks and writing them directly to the database files, bypassing much of the data processing that normally takes place. Therefore, a direct path load can usually load data faster than conventional path. However, there are several restrictions on direct path loads that may require you to use a conventional path load.
For example, direct path load cannot be used on clustered tables or on tables for which there are transactions pending. See Oracle Database Utilities for a complete discussion of situations in which direct path load should and should not be used.
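A hedged sketch of invoking a direct path load (control file and log file names assumed):

  sqlldr hr CONTROL=load.ctl LOG=load.log DIRECT=true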
External Tables. An external table load creates an external table for data that is contained in a datafile; it can also be used to load data across a network. See Oracle Database Administrator's Guide for more information on external tables. In the following example, a new table named dependents will be created in the HR sample schema.
It will contain information about dependents of employees listed in the employees table of the HR schema. Create the data file, dependents.dat. You can create this file using a variety of methods, such as a spreadsheet application or by simply typing it into a text editor. This file is a CSV (comma-separated values) file in which commas act as delimiters between the fields.
The field containing the first name is enclosed in double quotation marks in cases where a variant of the official name is also provided, that is, where the first name field contains a comma. On Linux, ensure that environment variables are set according to the instructions in "Setting Environment Variables on the Linux Platform". The dependents table includes a benefits column with a datatype of CLOB so that it can hold large blocks of character data.
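A hedged sketch of the table (only the benefits column and its CLOB datatype come from the text; the other column names and types are assumed):

  CREATE TABLE dependents (
    id           NUMBER(6),
    first_name   VARCHAR2(20),
    last_name    VARCHAR2(25),
    birthdate    DATE,
    relationship VARCHAR2(20),
    employee_id  NUMBER(6),
    benefits     CLOB
  );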
In this example, there is not yet any benefits information available, so the column is shown as NULL in the data file, dependents.dat. The data in the dependents.dat file is then loaded with SQL*Loader, as sketched below.
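A hedged sketch of a control file and invocation (the field list, date format, and file names are assumed):

  LOAD DATA
  INFILE 'dependents.dat'
  INTO TABLE dependents
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  (id, first_name, last_name, birthdate DATE 'YYYY-MM-DD', relationship, employee_id, benefits)

Saving this as dependents.ctl, the load might be run as:

  sqlldr hr CONTROL=dependents.ctl LOG=dependents.log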
Information about the load, including any errors, is written to the log file, dependents.log. Oracle Database XE provides several command-line utilities for exporting and importing data.
The following sections provide an overview of each utility and of when you might want to use each. The Data Pump Export utility exports data and metadata into a set of operating system files called a dump file set. The Data Pump Import utility imports an export dump file set into a target Oracle database. A dump file set is made up of one or more disk files that contain table data, database object metadata, and control information.
The files are written in a proprietary, binary format, which means that the dump file set can be imported only by the Data Pump Import utility. The dump file set can be imported to the same database or it can be moved to another system and loaded into the Oracle database there.
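Hedged sketches of a schema-mode Data Pump export and import (schema and file names assumed; DATA_PUMP_DIR is the default directory object):

  expdp hr SCHEMAS=hr DIRECTORY=DATA_PUMP_DIR DUMPFILE=hr.dmp LOGFILE=hr_exp.log
  impdp hr SCHEMAS=hr DIRECTORY=DATA_PUMP_DIR DUMPFILE=hr.dmp LOGFILE=hr_imp.log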