
A Case of ORA-31696 When Importing Data with impdp

Today I was helping a colleague import data from a dump file into a test database, and the import kept failing with ORA-31696:

[[email protected]]$ impdp pebank/pebank directory=dumpdir dumpfile=mcj123.1011.dmp remap_schema=ebank:pebank table_exists_action=append

Import: Release 10.2.0.4.0 - 64bit Production on Tuesday, 16 October, 2012 14:47:23

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production

With the Partitioning, Real Application Clusters, OLAP, Data Mining

and Real Application Testing options

Master table "PEBANK"."SYS_IMPORT_FULL_01" successfully loaded/unloaded

Starting "PEBANK"."SYS_IMPORT_FULL_01":  pebank/******** directory=dumpdir dumpfile=mcj123.1011.dmp remap_schema=ebank:pebank table_exists_action=append

Processing object type TABLE_EXPORT/TABLE/TABLE

ORA-39152: Table "PEBANK"."MCJNL" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

ORA-39152: Table "PEBANK"."MCJNLDATA" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

ORA-39152: Table "PEBANK"."MCJNLQUERYLOG" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

ORA-31696: unable to export/import TABLE_DATA:"PEBANK"."MCJNLQUERYLOG" using client specified AUTOMATIC method

ORA-31696: unable to export/import TABLE_DATA:"PEBANK"."MCJNLDATA" using client specified AUTOMATIC method

ORA-31693: Table data object "PEBANK"."MCJNL":"MCJNL_2011_X" failed to load/unload and is being skipped due to error:

ORA-00001: unique constraint (PEBANK.PK_MCJNL1) violated

. . imported "PEBANK"."MCJNL":"MCJNL_2009_10"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2009_11"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2009_12"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_01"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_02"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_03"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_04"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_05"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_06"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_07"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_08"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_09"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_10"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_11"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2010_12"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2011_01"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2011_02"                0 KB       0 rows

. . imported "PEBANK"."MCJNL":"MCJNL_2011_03"                0 KB       0 rows

Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Job "PEBANK"."SYS_IMPORT_FULL_01" completed with 6 error(s) at 14:51:44

Two of the tables failed to import, each raising ORA-31696: the table data could not be exported/imported using the client-specified "automatic" method.

A little research showed this is a known Oracle bug, Bug 4239903: IMPDP FAILED IF LONG DATATYPE IS THERE IN THE TABLE. It applies here because both tables contain a LONG column and I used the table_exists_action option.

The bug affects versions 10.1.0.2 to 10.2.0.4: when impdp imports a table containing a LONG column into a target database where the table already exists, and table_exists_action=append is used, the import fails with ORA-31696.

The official documentation confirms this. In versions 10.1.0.2 to 10.2.0.4, when the target table already exists, the options are: import with table_exists_action=replace, loading metadata and data in a single pass (splitting the job into content=metadata_only followed by content=data_only does not work; both must be done together); or drop or disable the constraints on the table before importing; or use the original exp/imp utilities instead of expdp/impdp; or upgrade the database to 10.2.0.5, where the bug is fixed. A sketch of the constraint workaround follows, and the full MOS note is quoted after it:
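As a minimal sketch of the disable-constraints workaround (option 2 in the note quoted below), using the MCJNLDATA table from the log above; the constraint name PK_MCJNLDATA is hypothetical, so look up the real names first:

-- Hypothetical sketch; adjust owner, table and constraint names to your own.
-- 1. Disable the enabled constraint that blocks a direct path load:
alter table pebank.mcjnldata disable constraint pk_mcjnldata;

-- 2. Re-run the import from the shell:
--    impdp pebank/pebank directory=dumpdir dumpfile=mcj123.1011.dmp remap_schema=ebank:pebank table_exists_action=append

-- 3. Re-enable the constraint once the data is loaded:
alter table pebank.mcjnldata enable constraint pk_mcjnldata;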


DataPump Import (IMPDP) Fails For Table With Column Datatype LONG With Error ORA-31696 [ID 305819.1]


Modified: 2012-3-9  Type: PROBLEM  Status: PUBLISHED  Priority: 3


Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.2.0.4 - Release: 10.1 to 10.2
Information in this document applies to any platform.

Symptoms

DataPump import fails with error ORA-31696 while loading data into a pre-existing table, if there is a LONG column in that table. This is demonstrated by the following example:

connect / as sysdba

create user test identified by test default tablespace users temporary tablespace temp;
grant connect, resource to test;

create or replace directory tmp as '/tmp';
grant read, write on directory tmp to test;

connect test/test

-- create table with LONG column
create table a_tab
(
   id    number,
   text_v varchar2(10),
   text_l long
);
alter table a_tab add constraint a_tab_pk primary key (id);

-- populate the table
begin
  for i in 1..10 loop
    insert into a_tab values (i, 'Text '||lpad (to_char (i), 5, '0'), 'Text LONG '||lpad (to_char (i), 990, '0'));
  end loop;
  commit;
end;
/

set long 1000

select * from a_tab;

ID        TEXT_V
---------- ----------
TEXT_L
--------------------------------------------------------------------------------
        1 Text 00001
Text LONG 0000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
....
0000000000000000000000000000000000000001

        2 Text 00002
Text LONG 0000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000002
....

10 rows selected.


Export the table with:

#> expdp test/test directory=tmp dumpfile=a_tab.dmp content=data_only tables=a_tab logfile=expdp_a_tab.log


Then:

truncate table a_tab;


and import the data with:

#> impdp test/test directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log


This fails with error:

Import: Release 10.2.0.1.0 - 64bit Production on Friday, 09 March, 2012 9:58:40

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01": test/******** directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31696: unable to export/import TABLE_DATA:"TEST"."A_TAB" using client specified AUTOMATIC method
Job "TEST"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 09:58:43

Cause

The following restrictions exist regarding data load into a pre-existing table:

- cannot use external table mode if there is a LONG column
- cannot use direct path load mode if an enabled constraint other than a table check constraint is present on the pre-existing table

Due to these restrictions, the procedure KUPD$DATA_INT.SELECT_MODE returns 'load_nopossible', and DataPump import fails with the error message.
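To check whether a given table is caught by both restrictions, dictionary queries along these lines can help (a sketch; the PEBANK owner and MCJNLDATA table come from the log at the top and stand in for your own names):

-- Restriction 1: a LONG or LONG RAW column rules out external table mode.
select column_name, data_type
  from dba_tab_columns
 where owner = 'PEBANK'
   and table_name = 'MCJNLDATA'
   and data_type in ('LONG', 'LONG RAW');

-- Restriction 2: an enabled constraint other than a check constraint
-- rules out direct path mode.
select constraint_name, constraint_type, status
  from dba_constraints
 where owner = 'PEBANK'
   and table_name = 'MCJNLDATA'
   and status = 'ENABLED'
   and constraint_type <> 'C';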

Solution

Please choose one of the following options:

1. Import both metadata and data at once, if the table has a LONG column and an enabled constraint.

Or:

2. First disable (or drop) the constraints on the existing table and then start the import.

Or:

3. Use the original export/import (exp/imp) to transfer the table from source to target.

Or:

4. Beginning with version 10.2.0.5, importing data in a pre-existing table with a LONG column is possible. The same test above returns during import:

Import: Release 10.2.0.5.0 - 64bit Production on Friday, 09 March, 2012 9:57:07

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01": test/******** directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."A_TAB"         15.46 KB          10 rows
Job "TEST"."SYS_IMPORT_FULL_01" successfully completed at 09:57:12

Import: Release 11.1.0.7.0 - 64bit Production on Friday, 09 March, 2012 9:49:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01": test/******** directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."A_TAB"         15.75 KB          10 rows
Job "TEST"."SYS_IMPORT_FULL_01" successfully completed at 09:49:34

Import: Release 11.2.0.3.0 - Production on Fri Mar 9 09:45:23 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_FULL_01": test/******** directory=tmp dumpfile=a_tab.dmp full=y table_exists_action=append logfile=impdp_a_tab.log
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."A_TAB"         15.74 KB          10 rows
Job "TEST"."SYS_IMPORT_FULL_01" successfully completed at 09:45:31

References

BUG:4239903 - IMPDP FAILED IF LONG DATATYPE IS THERE IN THE TABLE

Some extended reading:

Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data? [ID 552424.1]


Modified: 2011-8-26  Type: HOWTO  Status: PUBLISHED  Priority: 3


Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.2.0.2 - Release: 10.1 to 11.2
Oracle Server - Personal Edition - Version: 10.1.0.2 to 11.2.0.2 [Release: 10.1 to 11.2]
Oracle Server - Standard Edition - Version: 10.1.0.2 to 11.2.0.2 [Release: 10.1 to 11.2]
Enterprise Manager for RDBMS - Version: 10.1.0.2 to 11.2.0.2 [Release: 10.1 to 11.2]
Information in this document applies to any platform.
***Checked for relevance on 7-Feb-2011***

Goal

Starting with Oracle 10g, Oracle Data Pump can be used to move data in and out of a database. Data Pump can make use of different methods to move the data, and will automatically choose the fastest method. It is possible though, to manually enforce a specific method. This document demonstrates how to specify the method with which data will be loaded or unloaded with Data Pump.

Solution

1. Introduction.

Data Pump can use four mechanisms to move data in and out of a database:

  • Data file copying;
  • Direct path;
  • External tables;
  • Network link import.

The two most commonly used methods to move data in and out of databases with Data Pump are the "Direct Path" method and the "External Tables" method.

1.1. Direct Path mode.
After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation. Data Pump automatically uses the direct path method for loading and unloading data when the structure of a table allows it.

1.2. External Tables mode.
If data cannot be moved in direct path mode, or if there is a situation where parallel SQL can be used to speed up the data move even more, then the external tables mode is used. The external table mechanism creates an external table that maps the dump file data for the database table. The SQL engine is then used to move the data. If possible, the APPEND hint is used on import to speed the copying of the data into the database.
Note: When the Export NETWORK_LINK parameter is used to specify a network link for an export operation, a variant of the external tables method is used. In this case, data is selected from across the specified network link and inserted into the dump file using an external table.

1.3. Data File Copying mode.
This mode is used when a transport tablespace job is started, i.e.: the TRANSPORT_TABLESPACES parameter is specified for an Export Data Pump job. This is the fastest method of moving data because the data is not interpreted nor altered during the job, and Export Data Pump is used to unload only structural information (metadata) into the dump file.

1.4. Network Link Import mode.
This mode is used when the NETWORK_LINK parameter is specified during an Import Data Pump job. This is the slowest of the four access methods because this method makes use of an INSERT SELECT statement to move the data over a database link, and reading over a network is generally slower than reading from a disk.

The "Data File Copying" and "Network Link Import" methods to move data in and out of databases are outside the scope of this article, and therefore not discussed any further.

For details about the access methods of the classic export client (exp), see:
Note:155477.1 "Parameter DIRECT: Conventional Path Export Versus Direct Path Export"

2. Export Data Pump: unloading data in "Direct Path" mode.

Export Data Pump will use the "Direct Path" mode to unload data in the following situations:

EXPDP will use DIRECT_PATH mode if:

2.1. The structure of a table allows a Direct Path unload, i.e.:
     - The table does not have fine-grained access control enabled for SELECT.
     - The table is not a queue table.
     - The table does not contain one or more columns of type BFILE or opaque, or an object type containing opaque columns.
     - The table does not contain encrypted columns.
     - The table does not contain a column of an evolved type that needs upgrading.
     - If the table has a column of datatype LONG or LONG RAW, then this column is the last column.

2.2. The QUERY, SAMPLE, and REMAP_DATA parameters were not used for the specified table in the Export Data Pump job.

2.3. The table or partition is relatively small (up to 250 Mb), or the table or partition is larger, but the job cannot run in parallel because the parameter PARALLEL was not specified (or was set to 1).

Note that with an unload of data in Direct Path mode, parallel I/O execution processes (PX processes) cannot be used to unload the data in parallel (parallel unload is not supported in Direct Path mode).
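As an aside, the LONG rule in 2.1 is easy to verify with a dictionary query (a sketch; the TEST owner and A_TAB table come from the example above and are placeholders):

-- List the columns in positional order; a LONG or LONG RAW column that is
-- not the last one (highest column_id) rules out a Direct Path unload.
select column_name, data_type, column_id
  from dba_tab_columns
 where owner = 'TEST'
   and table_name = 'A_TAB'
 order by column_id;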

3. Export Data Pump: unloading data in "ExternalTables" mode.

Export Data Pump will use the "External Tables" mode tounload data in the following situations:

EXPDP willuse EXTERNAL_TABLE mode if:

3.1. Data cannot be unloaded in Direct Path mode, because of the structure ofthe table, i.e.: 
     - Fine-grained access control for SELECT is enabledfor the table. 
     - The table is a queue table. 
     - The table contains one or more columns of type BFILE oropaque, or an object type containing opaque columns. 
     - The table contains encrypted columns. 
     - The table contains a column of an evolved type that needsupgrading. 
     - The table contains a column of type LONG or LONG RAW thatis not last. 

3.2. Data could also have been unloaded in "Direct Path" mode, butthe parameters QUERY, SAMPLE, or REMAP_DATA were used for the specified tablein the Export Data Pump job. 

3.3. Data could also have been unloaded in "Direct Path" mode, butthe table or partition is relatively large (> 250 Mb) and parallel SQL canbe used to speed up the unload even more. 

Note that with an unload of data in External Tables mode, parallel I/O execution processes (PX processes) can be used to unload the data in parallel. In that case the Data Pump Worker process acts as the coordinator for the PX processes. However, this does not apply when the table has a LOB column: in that case the table parallelism will always be 1. See also:
Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"

4. Import Data Pump: loading data in "Direct Path" mode.

Import Data Pump will use the "Direct Path" mode to load data in the following situations:

IMPDP will use DIRECT_PATH if:

4.1. The structure of a table allows a Direct Path load, i.e.:
     - A global index does not exist on a multipartition table during a single-partition load. This includes object tables that are partitioned.
     - A domain index does not exist for a LOB column.
     - The table is not in a cluster.
     - The table does not have BFILE columns or columns of opaque types.
     - The table does not have VARRAY columns with an embedded opaque type.
     - The table does not have encrypted columns.
     - Supplemental logging is not enabled, or supplemental logging is enabled and the table does not have a LOB column.
     - The table into which data is being imported is a pre-existing table and:
        – There is not an active trigger, and:
        – The table is partitioned and has an index, and:
        – Fine-grained access control for INSERT mode is not enabled, and:
        – A constraint other than table check does not exist, and:
        – A unique index does not exist.

4.2. The QUERY and REMAP_DATA parameters were not used for the specified table in the Import Data Pump job.

4.3. The table or partition is relatively small (up to 250 Mb), or the table or partition is larger, but the job cannot run in parallel because the parameter PARALLEL was not specified (or was set to 1).
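The pre-existing-table conditions in 4.1 can likewise be checked from the dictionary (a sketch; owner and table names are placeholders):

-- Active triggers on the target table block a Direct Path load:
select trigger_name, status
  from dba_triggers
 where table_owner = 'PEBANK'
   and table_name = 'MCJNLDATA'
   and status = 'ENABLED';

-- So do unique indexes:
select index_name, uniqueness
  from dba_indexes
 where table_owner = 'PEBANK'
   and table_name = 'MCJNLDATA'
   and uniqueness = 'UNIQUE';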


5. Import Data Pump: loading data in "External Tables" mode.

Import Data Pump will use the "External Tables" mode toload data in the following situations:

IMPDP willuse EXTERNAL_TABLE if:

5.1. Data cannot be loaded in Direct Path mode, because at least one of thefollowing conditions exists: 
     - A global index on multipartition tables exists during asingle-partition load. This includes object tables that are partitioned. 
     - A domain index exists for a LOB column. 
     - A table is in a cluster. 
     - A table has BFILE columns or columns of opaque types. 
     - A table has VARRAY columns with an embedded opaque type. 
     - The table has encrypted columns. 
     - Supplemental logging is enabled and the table has atleast one LOB column. 
     - The table into which data is being imported is apre-existing table and at least one of the following conditions exists: 
        – There is an active trigger 
        – The table is partitioned and does nothave any indexes 
        – Fine-grained access control for INSERTmode is enabled for the table. 
        – An enabled constraint exists (otherthan table check constraints) 
        – A unique index exists 

5.2. Data could also have been loaded in "Direct Path" mode, but theparameters QUERY, or REMAP_DATA were used for the specified table in the ImportData Pump job. 

5.3. Data could also have been loaded in "Direct Path" mode, but thetable or partition is relatively large (> 250 Mb) and parallel SQL can beused to speed up the load even more.

Note that with a load of data in External Tables mode, parallel I/O execution processes (PX processes) can be used to load the data in parallel. In that case the Data Pump Worker process acts as the coordinator for the PX processes. However, this does not apply when the table has a LOB column: in that case the table parallelism will always be 1. See also:
Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"

6. How to enforce a specific load/unload method?

In very specific situations, the undocumented parameter ACCESS_METHOD can be used to enforce a specific method to unload or load the data. Example:

%expdp system/manager ... ACCESS_METHOD=DIRECT_PATH  
%expdp system/manager ... ACCESS_METHOD=EXTERNAL_TABLE 

or:

%impdp system/manager ... ACCESS_METHOD=DIRECT_PATH  
%impdp system/manager ... ACCESS_METHOD=EXTERNAL_TABLE 

Important Need-To-Know's when the parameter ACCESS_METHOD is specified for a job:

  • The parameter ACCESS_METHOD is an undocumented parameter and should only be used when requested by Oracle Support.
  • If the parameter is not specified, then Data Pump will automatically choose the best method to load or unload the data.
  • If import Data Pump cannot choose due to conflicting restrictions, an error will be reported:
    ORA-31696: unable to export/import TABLE_DATA:"SCOTT"."EMP" using client specified AUTOMATIC method
  • The parameter can only be specified when the Data Pump job is initially started (i.e. the parameter cannot be specified when the job is restarted).
  • If the parameter is specified, the method of loading or unloading the data is enforced on all tables that need to be loaded or unloaded with the job.
  • Enforcing a specific method may result in slower performance of the overall Data Pump job, or errors such as:

... 
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA 
ORA-31696: unable to export/importTABLE_DATA:"SCOTT"."MY_TAB" using client specifiedDIRECT_PATH method 
... 

  • To determine which access method is used, a Worker trace file can be created, e.g.:

%expdp system/manager DIRECTORY=my_dir \ 
DUMPFILE=expdp_s.dmp LOGFILE=expdp_s.log \ 
TABLES=scott.my_tab TRACE=400300

The Worker trace file shows the method with which the data was loaded (or unloaded for Import Data Pump):

... 
KUPW:14:57:14.289: 1: object: TABLE_DATA:"SCOTT"."MY_TAB" 
KUPW:14:57:14.289: 1: TABLE_DATA:"SCOTT"."MY_TAB" external table, parallel: 1 
...

For details, see also:
Note:286496.1 "Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump"

7. Known issues.

7.1. Bug 4722517 - Materialized view log not updated after import into existing table
Defect:  Bug:4722517 "MATERIALIZED VIEW LOG NOT UPDATED AFTER IMPORT DATAPUMP JOB INTO EXISTING TABLE"
Symptoms:  a materialized view is created with FAST REFRESH on a master table; if data is imported into this master table, then these changes (inserts) do not show up in the materialized view log
Releases:  10.1.0.2.0 and higher
Fixed in:  not applicable, closed as not-a-bug
Patched files:  not applicable
Workaround:  if possible import into a temporary holding table then copy the data with "insert as select" into the master table
Cause:  a fast refresh does not apply changes that result from bulk load operations on masters, such as an INSERT with the APPEND hint used by Import Data Pump
Trace:  not applicable, changes are not propagated
Remarks:  see also Note:340789.1 "Import Datapump (Direct Path) Does Not Update Materialized View Logs"
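A minimal sketch of that holding-table workaround, with illustrative names (mv_master as the master table, mv_master_stage as the holding table) and assuming the table has no LONG column, since a conventional INSERT ... SELECT cannot copy LONGs:

-- 1. Create an empty holding table with the same shape as the master:
create table mv_master_stage as select * from mv_master where 1 = 0;

-- 2. Import the dump into the holding table instead of the master, e.g.
--    with REMAP_TABLE (available from 11.1):
--    impdp test/test directory=tmp dumpfile=mv_master.dmp remap_table=mv_master:mv_master_stage table_exists_action=append

-- 3. Copy the rows with a conventional INSERT ... SELECT, which the
--    fast-refresh materialized view log does record:
insert into mv_master select * from mv_master_stage;
commit;
drop table mv_master_stage;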

7.2. Bug 5599947 - Export Data Pump is slow when table has a LOB column
Defect:  Bug:5599947 "DATAPUMP EXPORT VERY SLOW"
Symptoms:  Export Data Pump has low performance when exporting a table with a LOB column
Releases:  11.1.0.6 and below
Fixed in:  not applicable, closed as not feasible to fix
Patched files:  not applicable
Workaround:  if possible re-organize the large table with the LOB column and make it partitioned
Cause:  if a table has a LOB column, and the unload or load takes place in "External Tables" mode, then we cannot make use of parallel I/O execution processes (PX processes)
Trace:  not applicable
Remarks:  see also Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"

7.3. Bug 5941030 - Corrupt blocks after Import Data Pump when table has LONG / LONG RAW column
Defect:  Bug:5941030 "Datapump import can produce corrupt blocks when there is a LONG / LONG RAW"
Symptoms:  Direct Path import of a LONG / LONG RAW column can create corrupt blocks in the database. If DB_BLOCK_CHECKING is enabled then an ORA-600 [6917] error can be signalled. If not, then the corrupt block can cause subsequent problems, like ORA-1498 (block check failure) on an analyze of the table.
Releases:  11.1.0.6 and below
Fixed in:  10.2.0.5.0 and 11.1.0.7.0 and higher; for some platforms a fix on top of 10.2.0.2.0 and on top of 10.2.0.3.0 is available with Patch:5941030
Patched files:  kdbl.o
Workaround:  if possible use the classic export and import clients to transfer this table
Cause:  internal issue with column count when loading a table with a LONG/LONG RAW column in Direct Path mode
Trace:  not applicable
Remarks:  see also Note:457128.1 "Logical Corruption Encountered After Importing Table With Long Column Using DataPump"


References

BUG:4722517 - MATERIALIZED VIEW LOG NOT UPDATED AFTER IMPORT DATAPUMP JOB INTO EXISTING TABLE
BUG:4727162 - PRODUCT ENHANCEMENT: ADD NEW DATAPUMP EXT TAB ACCESS METHOD WITHOUT APPEND HINT
BUG:5599947 - DATAPUMP EXPORT VERY SLOW
BUG:5941030 - DATAPUMP IMPORT CAN CORRUPT DATA WHEN THERE IS A LONG / LONG RAW
BUG:5943346 - PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN
NOTE:155477.1 - Parameter DIRECT: Conventional Path Export Versus Direct Path Export
NOTE:286496.1 - Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump
NOTE:340789.1 - Import Datapump (Direct Path) Does Not Update Materialized View Logs
NOTE:365459.1 - Parallel Capabilities of Oracle Data Pump
NOTE:453895.1 - Checklist for Slow Performance of Export Data Pump (expdp) and Import DataPump (impdp)
NOTE:457128.1 - Logical Corruption Encountered After Importing Table With Long Column Using DataPump
NOTE:469439.1 - IMPDP Can Fail with ORA-31696 if ACCESS_METHOD=DIRECT_PATH Is Manually Specified
http://www.oracle.com/technology/pub/notes/technote_pathvsext.html
