Oracle import and export, error handling, and importing a higher-version dmp file into a lower version
Import command
imp username/password@service_name_from_tnsnames.ora fromuser=user_the_dump_was_exported_from touser=user_to_import_into file=path_to_dmp_file
imp znsh/***@db199 fromuser=heimdall touser=znsh file=d:\aa.dmp
If a table already exists at import time, imp reports an error and skips that table. Add ignore=y to ignore the creation error and append the records to the existing table:
imp znsh/***@db199 fromuser=heimdall touser=znsh file=d:\aa.dmp ignore=y
Export command:
exp username/password@service_name_from_tnsnames.ora file=path_to_dmp_file owner=user_to_export
exp heimdall/
If the database service is not local but on another server,
you only need a service name configured in your local tnsnames.ora. Say you have configured a service name db199
and now want to export the data on the 199 server:
exp heimdall/***@db199 file=d:\aa.dmp owner=heimdall
----------------------- Problems you may hit during import, with some solutions
Importing a dmp file exported from Oracle 11g into Oracle 10g
1. The import attempt fails with:
IMP-00010: not a valid export file, header failed verification
IMP-00000: Import terminated unsuccessfully
2. Going from Oracle 11g down to 10g is known to be problematic. The usual approach is to connect to the 11g database with a 10g client, export from there, and import into 10g. But what if all you have is the 11g dmp file?
3. The header verification failure is caused by the version-number mismatch. In testing, it can be worked around as follows: open the dmp file with Notepad++ and you will see the header string TEXPORT:V11.01.00, which is the source database version. Change it to the target database version, e.g. 10.02.01 on this machine.
4. Re-run the import; it now succeeds.
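The byte-level edit described in step 3 can also be scripted, which avoids an editor accidentally altering other bytes. A minimal sketch (unsupported hack; the version strings and file name are just the values from this example, and you should always work on a copy of the dump file):

```python
# Patch the exp version tag in a dmp header (e.g. V11.01.00 -> V10.02.01)
# so an older imp accepts the file. Unsupported hack -- use a copy.
import os
import tempfile

def patch_dmp_version(path, old=b"V11.01.00", new=b"V10.02.01"):
    # Replacement must be the same length so no bytes shift in the file.
    assert len(old) == len(new)
    with open(path, "rb") as f:
        header = f.read(4096)          # the tag sits in the first bytes
    if old not in header:
        raise ValueError("version tag not found in header")
    with open(path, "r+b") as f:
        f.write(header.replace(old, new, 1))   # rewrite header in place

# Demo on a fake dump header (a real file would be your copy of aa.dmp):
path = os.path.join(tempfile.mkdtemp(), "aa.dmp")
with open(path, "wb") as f:
    f.write(b"\x03TEXPORT:V11.01.00\nSOME BINARY PAYLOAD")
patch_dmp_version(path)
with open(path, "rb") as f:
    print(f.read(19))   # header now carries the 10g version tag
```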
The import then hung on a large table. Some waiting is expected, but after an hour with no progress, try the following:
1. Check the number of cursors currently open
select count(*) from v$open_cursor;
2. Check the cursor limit configured in the database
show parameter open_cursors; -- run this in a PL/SQL Developer Command Window (or SQL*Plus), not a SQL window
3. If the open-cursor count has reached the configured limit, raise the limit
alter system set open_cursors=3000 scope=both;
Can imp exclude a table from an import?
When importing with imp, can one or a few unwanted tables be skipped?
The imp/exp tools of Oracle 9i and earlier can only select tables to import or export; they cannot exclude them. Oracle 10g's expdp and impdp add the exclude parameter, which allows excluding specific tables, object types, and so on.
With classic exp/imp there are two workarounds:
Method 1:
The tables=(table1,table2,...,tablen) parameter imports only the listed tables.
It can only include tables, not exclude them.
Method 2:
Pre-create the unwanted tables (same names) under the target user.
Import with ignore=n; those tables then error out and are skipped.
With expdp:
You can use the clause INCLUDE=TABLE:"LIKE 'TAB%'" to export only the tables whose names start with TAB. Similarly, INCLUDE=TABLE:"NOT LIKE 'TAB%'" excludes all tables whose names start with TAB. Alternatively, the EXCLUDE parameter excludes specific objects.
Although expdp -help shows the exclude syntax as exclude=table:emp,
that form actually fails.
The correct syntax is exclude=table:"in ('EMP')"
Examples:
C:\> expdp oracle/oracle directory=testexpdp dumpfile=zzw_temp3.dmp exclude=TABLE:"IN('TEST2')"
This works.
C:\> expdp oracle/oracle directory=testexpdp dumpfile=zzw_temp3.dmp exclude=TABLE:"IN ('TEST2','ZZW_TEMP2')"
But this fails: excluding more than one table raises ORA-39071: Value for EXCLUDE is badly formed.
Escape characters are needed; it should be:
C:\> expdp oracle/oracle directory=testexpdp dumpfile=zzw_temp3.dmp exclude=TABLE:\"IN ('TEST2','ZZW_TEMP2')\"
Summary:
1. Table names must be uppercase.
2. When excluding multiple tables, mind the escape characters.
3. When using the exclude parameter, do not also pass the schemas parameter; if schemas is present, all objects of that user's schema are exported.
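The escaping headaches in point 2 disappear if the clause lives in a parameter file instead of on the command line. A minimal sketch of such a parfile, reusing the directory, dump file, and table names from the example above (run it as expdp oracle/oracle parfile=exclude.par):

```
# exclude.par -- no shell escaping needed inside a parameter file
directory=testexpdp
dumpfile=zzw_temp3.dmp
exclude=TABLE:"IN ('TEST2','ZZW_TEMP2')"
```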
1. Create the DIRECTORY
create directory dir_dp as 'D:\oracle\dir_dp';
2. Grant privileges on it
grant read,write on directory dir_dp to lttfm;
(Over a network link, steps 1/2 are omitted.)
-- list directories and the privileges on them
SELECT privilege, directory_name, DIRECTORY_PATH FROM user_tab_privs t, all_directories d
WHERE t.table_name(+) = d.directory_name ORDER BY 2, 1;
-- directory_name is the value to pass as the directory parameter
--impdp gwm/
3. Importing directly over the network, without first generating a dmp file via expdp:
-- import table p_street_area from the source database into the target database
impdp gwm/gwm directory=dir_dp NETWORK_LINK=igisdb schemas=gwm logfile=p_street_area.log
(tables=p_street_area and job_name=my_job are optional)
igisdb is the database link from the target database to the source; dir_dp is a directory object on the target database.
4. Remapping tablespaces
Use the remap_tablespace parameter.
-- export all data of user gwm
expdp system/orcl directory=data_pump_dir dumpfile=gwm.dmp SCHEMAS=gwm
Note: exporting a user's data as the sys user includes the user-creation and grant statements; an export done as the user itself does not.
-- import all of user gwm's data into tablespace gcomm (it was previously in tablespace gmapdata)
impdp system/orcl directory=data_pump_dir dumpfile=gwm.dmp remap_tablespace=gmapdata:gcomm
In practice, the clause causes no syntax errors when placed in a parameter file; but when written directly on the command line, escape characters must be added:
Windows:
D:\> expdp system/manager DIRECTORY=my_dir DUMPFILE=exp_tab.dmp LOGFILE=exp_tab.log SCHEMAS=scott INCLUDE=TABLE:\"IN ('EMP', 'DEP')\"
Unix:
% expdp system/manager DIRECTORY=my_dir DUMPFILE=exp_tab.dmp LOGFILE=exp_tab.log SCHEMAS=scott INCLUDE=TABLE:\"IN \(\'EMP\', \'DEP\'\)\"
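When expdp is driven from a script rather than typed into a shell, the escaping problem can be sidestepped entirely: passing the arguments as a list hands each one to the program verbatim, with no cmd.exe or sh quoting in between. A sketch in Python, using the placeholder credentials and object names from the Windows example above:

```python
# Build the expdp argument list programmatically; each list element reaches
# expdp verbatim, so no shell escape characters are needed.
import subprocess

tables = ["EMP", "DEP"]                      # table names must be uppercase
include = 'TABLE:"IN ({})"'.format(
    ", ".join("'{}'".format(t) for t in tables))

argv = [
    "expdp", "system/manager",
    "DIRECTORY=my_dir",
    "DUMPFILE=exp_tab.dmp",
    "LOGFILE=exp_tab.log",
    "SCHEMAS=scott",
    "INCLUDE=" + include,
]
print(" ".join(argv))                  # inspect the command before running
# subprocess.run(argv, check=True)     # uncomment on a host with expdp
```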
CREATE OR REPLACE DIRECTORY dir_dump AS '/archlog/backup/';
GRANT read,write ON DIRECTORY dir_dump TO public;
alter system set sga_max_size=5120M scope=spfile;
alter system set sga_target=5120M scope = spfile;
alter system set pga_aggregate_target=1500m scope=spfile;
alter system set sga_max_size=2048M scope=spfile;
alter system set sga_target=2048M scope = spfile;
create pfile='C:\oracle\product\10.2.0\admin\jssi\pfile\init.ora' from spfile;
startup pfile='C:\oracle\product\10.2.0\admin\jssi\pfile\init.ora.8302013235649';
Select Privilege, Directory_Name, Directory_Path
From User_Tab_Privs t, All_Directories d
Where t.Table_Name(+) = d.Directory_Name
Order By 2, 1
expdp phrep/phrep schemas=phrep directory=dir_dump dumpfile =phrep.dmp logfile=phrep.log job_name=phrep_job;
expdp phsi/phsi#zqzb001 schemas=phsi directory=dir_dump dumpfile =phsi.dmp logfile=phsi.log job_name=phsinew_job;
expdp sjqy/sjqy schemas=sjqy directory=dir_dump dumpfile =sjqy.dmp job_name=sjqy_job;
expdp \'/ as sysdba\' directory=backup full=y dumpfile=fullexp.dmp logfile=fullexp.log parallel=2 schemas=dave,bl version='10.2.0.1.0';
impdp unilg/*** dumpfile=catalogtbs.dmp directory=dump_dir transport_datafiles='D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\catalogtbs01.dbf' version='10.2.0.1.0';
Export: Release 11.2.0.1.0 - Production on Wed Apr 16 22:02:10 2014
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:
Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
You can control how Export runs by entering the 'expdp' command followed
by various parameters. To specify parameters, you use keywords:
Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
USERID must be the first parameter on the command line.
------------------------------------------------------------------------------
The available keywords and their descriptions follow. Default values are listed within square brackets.
ATTACH
Attach to an existing job.
For example, ATTACH=job_name.
COMPRESSION
Reduce the size of a dump file.
Valid keyword values are: ALL, DATA_ONLY, [METADATA_ONLY] and NONE.
CONTENT
Specifies data to unload.
Valid keyword values are: [ALL], DATA_ONLY and METADATA_ONLY.
DATA_OPTIONS
Data layer option flags.
Valid keyword values are: XML_CLOBS.
DIRECTORY
Directory object to be used for dump and log files.
DUMPFILE
Specify list of destination dump file names [expdat.dmp].
For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION
Encrypt part or all of a dump file.
Valid keyword values are: ALL, DATA_ONLY, ENCRYPTED_COLUMNS_ONLY, METADATA_ONLY and NONE.
ENCRYPTION_ALGORITHM
Specify how encryption should be done.
Valid keyword values are: [AES128], AES192 and AES256.
ENCRYPTION_MODE
Method of generating encryption key.
Valid keyword values are: DUAL, PASSWORD and [TRANSPARENT].
ENCRYPTION_PASSWORD
Password key for creating encrypted data within a dump file.
ESTIMATE
Calculate job estimates.
Valid keyword values are: [BLOCKS] and STATISTICS.
ESTIMATE_ONLY
Calculate job estimates without performing the export.
EXCLUDE
Exclude specific object types.
For example, EXCLUDE=SCHEMA:"='HR'".
FILESIZE
Specify the size of each dump file in units of bytes.
FLASHBACK_SCN
SCN used to reset session snapshot.
FLASHBACK_TIME
Time used to find the closest corresponding SCN value.
FULL
Export entire database [N].
HELP
Display Help messages [N].
INCLUDE
Include specific object types.
For example, INCLUDE=TABLE_DATA.
JOB_NAME
Name of export job to create.
LOGFILE
Specify log file name [export.log].
NETWORK_LINK
Name of remote database link to the source system.
NOLOGFILE
Do not write log file [N].
PARALLEL
Change the number of active workers for current job.
PARFILE
Specify parameter file name.
QUERY
Predicate clause used to export a subset of a table.
For example, QUERY=employees:"WHERE department_id > 10".
REMAP_DATA
Specify a data conversion function.
For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.
REUSE_DUMPFILES
Overwrite destination dump file if it exists [N].
SAMPLE
Percentage of data to be exported.
SCHEMAS
List of schemas to export [login schema].
SOURCE_EDITION
Edition to be used for extracting metadata.
STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.
TABLES
Identifies a list of tables to export.
For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.
TABLESPACES
Identifies a list of tablespaces to export.
TRANSPORTABLE
Specify whether transportable method can be used.
Valid keyword values are: ALWAYS and [NEVER].
TRANSPORT_FULL_CHECK
Verify storage segments of all tables [N].
TRANSPORT_TABLESPACES
List of tablespaces from which metadata will be unloaded.
VERSION
Version of objects to export.
Valid keyword values are: [COMPATIBLE], LATEST or any valid database version.
------------------------------------------------------------------------------
The following commands are valid while in interactive mode.
Note: abbreviations are allowed.
ADD_FILE
Add dumpfile to dumpfile set.
CONTINUE_CLIENT
Return to logging mode. Job will be restarted if idle.
EXIT_CLIENT
Quit client session and leave job running.
FILESIZE
Default filesize (bytes) for subsequent ADD_FILE commands.
HELP
Summarize interactive commands.
KILL_JOB
Detach and delete job.
PARALLEL
Change the number of active workers for current job.
REUSE_DUMPFILES
Overwrite destination dump file if it exists [N].
START_JOB
Start or resume current job.
Valid keyword values are: SKIP_CURRENT.
STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.
STOP_JOB
Orderly shutdown of job execution and exits the client.
Valid keyword values are: IMMEDIATE.