Oracle Architecture Study Notes
The Oracle architecture consists of an instance plus a set of data files. An instance contains the SGA (System Global Area), a shared memory region whose main components are the shared pool, the data buffer (buffer cache), and the log buffer.

The shared pool in the SGA parses SQL and stores execution plans. When a statement then fetches data according to its plan, Oracle first checks whether the blocks are already in the data buffer; only if they are not does it read them from disk, and the blocks read are placed into the data buffer, so the next access is served straight from memory. When a statement modifies data, the changed buffers must eventually be written back to disk; to protect those changes before that write happens, redo entries are first recorded in the log buffer. That is a rough sketch of how it works.
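As a quick check of these components, the SGA breakdown can be inspected from a privileged session via the standard dynamic performance view `v$sgainfo` (sizes will of course vary by instance):

```sql
-- Inspect the sizes of the SGA components (requires access to v$ views)
select name, bytes, resizeable
  from v$sgainfo
 order by bytes desc;

-- In SQL*Plus, a one-line summary is also available with:
-- show sga
```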
The structural relationships are shown in the figure below, taken from the book 《收穫,不止SQL優化》:
The following sections give tuning examples related to the shared pool, the data buffer, and the log buffer.
Shared pool example
First, batch inserts without bind variables. At login time an application frequently runs SQL such as

select * from sys_users where username='admin'

and similar variants. If many users log in, many nearly identical statements like this get executed. Can one statement stand for all of them? In other words, for statements that differ only in literal values, there is no need for the Oracle optimizer to parse each one and build a fresh execution plan every time. Oracle provides bind variables for exactly this purpose, and the whole family of statements becomes

select * from sys_users where username=:x

where the variable :x stands for the value. A concrete example follows.
Create a table for the test:

create table t (x int);

Without bind variables, batch-insert data:

begin
  for i in 1 .. 1000 loop
    execute immediate 'insert into t values(' || i || ')';
    commit;
  end loop;
end;
/

Output:

Elapsed: 00:00:00.80

With bind variables (written in the :x form):

begin
  for i in 1 .. 100 loop
    execute immediate 'insert into t values( :x )' using i;
    commit;
  end loop;
end;
/

Elapsed: 00:00:00.05
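To see why the bind-variable version is faster, one can look in the shared pool afterwards: each literal statement produces its own cursor (one hard parse apiece), while the bind version is a single shared cursor executed many times. A sketch of such a check (the `like` pattern assumes the test table is named `t` as above):

```sql
-- Literal SQL: many distinct cursors, one execution each
-- Bind SQL: one cursor, many executions
select sql_text, executions, parse_calls
  from v$sql
 where sql_text like 'insert into t values%'
 order by executions desc;
```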
Data buffer example

This section gives an example related to the data buffer cache.
(1) Flushing the caches

-- Create a table for the test
SQL> create table t as select * from dba_objects;
Table created.

-- Set the printed line width
SQL> set linesize 1000
-- Turn on the execution plan display
SQL> set autotrace on
-- Print timing
SQL> set timing on

-- Query the data
SQL> select count(1) from t;

  COUNT(1)
----------
     72043

Elapsed: 00:00:00.10

-- Flush the buffer cache (NB: do not run this casually in production)
SQL> alter system flush buffer_cache;
System altered.
Elapsed: 00:00:00.08

-- Flush the shared pool (NB: do not run this casually in production)
SQL> alter system flush shared_pool;

-- Query again: with the caches flushed, the statement must be re-parsed
-- and the blocks re-read from disk, so it runs slightly slower
SQL> select count(1) from t;

  COUNT(1)
----------
     72043

Elapsed: 00:00:00.12
SQL>
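Besides timing, the buffer cache effect can be observed through system statistics: as blocks stay cached, physical reads grow much more slowly than logical reads. A sketch using standard `v$sysstat` statistic names:

```sql
-- Compare disk reads with memory reads; sample before and after
-- repeated queries to see 'physical reads' flatten out while the
-- logical read counters keep climbing
select name, value
  from v$sysstat
 where name in ('physical reads', 'db block gets', 'consistent gets');
```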
Log buffer example

A note here: disabling logging can improve performance, but it must not be used casually in production; it is only suitable for specific scenarios. The SQL is:

alter table [table_name] nologging;
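Note that nologging mainly reduces redo for direct-path operations; a conventional insert into a nologging table still generates normal redo. A hedged sketch of the typical pattern (the table name `t_big` is made up for illustration):

```sql
-- Direct-path load into a nologging table generates minimal redo.
-- Caution: the loaded data is not recoverable from the redo stream,
-- so only use this for data you can re-create.
alter table t_big nologging;
insert /*+ append */ into t_big select * from dba_objects;
commit;
```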
Further tuning notes

These are short notes from reading the book 《收穫,不止SQL優化》.
(1) Transactions when batch-inserting data

For committing transactions in a batch loop, it matters whether commit is placed inside or outside the loop.

With commit inside the loop, a transaction is committed on every iteration, which takes relatively more time:

begin
  for i in 1 .. 1000 loop
    execute immediate 'insert into t values(' || i || ')';
    commit;
  end loop;
end;

With commit outside the loop, the transaction is committed once after the whole loop succeeds, which takes relatively less time:

begin
  for i in 1 .. 1000 loop
    execute immediate 'insert into t values(' || i || ')';
  end loop;
  commit;
end;
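The difference between the two blocks can be measured directly: with commit inside the loop, the session's 'user commits' statistic grows by 1000 and extra redo is generated for the commit records. A sketch of reading these counters for the current session (run it before and after each block and compare the deltas):

```sql
-- Session-level counters for the current session
select sn.name, ms.value
  from v$mystat ms
  join v$statname sn on sn.statistic# = ms.statistic#
 where sn.name in ('user commits', 'redo size');
```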
The book 《收穫,不止SQL優化》 provides the following script for viewing logical reads, parses, transaction counts and so on, per AWR snapshot:

select s.snap_date,
       decode(s.redosize, null, '--shutdown or end--', s.currtime) "TIME",
       to_char(round(s.seconds / 60, 2)) "elapse(min)",
       round(t.db_time / 1000000 / 60, 2) "DB time(min)",
       s.redosize redo,
       round(s.redosize / s.seconds, 2) "redo/s",
       s.logicalreads logical,
       round(s.logicalreads / s.seconds, 2) "logical/s",
       physicalreads physical,
       round(s.physicalreads / s.seconds, 2) "phy/s",
       s.executes execs,
       round(s.executes / s.seconds, 2) "execs/s",
       s.parse,
       round(s.parse / s.seconds, 2) "parse/s",
       s.hardparse,
       round(s.hardparse / s.seconds, 2) "hardparse/s",
       s.transactions trans,
       round(s.transactions / s.seconds, 2) "trans/s"
  from (select curr_redo - last_redo redosize,
               curr_logicalreads - last_logicalreads logicalreads,
               curr_physicalreads - last_physicalreads physicalreads,
               curr_executes - last_executes executes,
               curr_parse - last_parse parse,
               curr_hardparse - last_hardparse hardparse,
               curr_transactions - last_transactions transactions,
               round(((currtime + 0) - (lasttime + 0)) * 3600 * 24, 0) seconds,
               to_char(currtime, 'yy/mm/dd') snap_date,
               to_char(currtime, 'hh24:mi') currtime,
               currsnap_id endsnap_id,
               to_char(startup_time, 'yyyy-mm-dd hh24:mi:ss') startup_time
          from (select a.redo last_redo,
                       a.logicalreads last_logicalreads,
                       a.physicalreads last_physicalreads,
                       a.executes last_executes,
                       a.parse last_parse,
                       a.hardparse last_hardparse,
                       a.transactions last_transactions,
                       lead(a.redo, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_redo,
                       lead(a.logicalreads, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_logicalreads,
                       lead(a.physicalreads, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_physicalreads,
                       lead(a.executes, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_executes,
                       lead(a.parse, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_parse,
                       lead(a.hardparse, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_hardparse,
                       lead(a.transactions, 1, null) over(partition by b.startup_time order by b.end_interval_time) curr_transactions,
                       b.end_interval_time lasttime,
                       lead(b.end_interval_time, 1, null) over(partition by b.startup_time order by b.end_interval_time) currtime,
                       lead(b.snap_id, 1, null) over(partition by b.startup_time order by b.end_interval_time) currsnap_id,
                       b.startup_time
                  from (select snap_id, dbid, instance_number,
                               sum(decode(stat_name, 'redo size', value, 0)) redo,
                               sum(decode(stat_name, 'session logical reads', value, 0)) logicalreads,
                               sum(decode(stat_name, 'physical reads', value, 0)) physicalreads,
                               sum(decode(stat_name, 'execute count', value, 0)) executes,
                               sum(decode(stat_name, 'parse count (total)', value, 0)) parse,
                               sum(decode(stat_name, 'parse count (hard)', value, 0)) hardparse,
                               sum(decode(stat_name, 'user rollbacks', value, 'user commits', value, 0)) transactions
                          from dba_hist_sysstat
                         where stat_name in ('redo size', 'session logical reads', 'physical reads', 'execute count', 'user rollbacks', 'user commits', 'parse count (hard)', 'parse count (total)')
                         group by snap_id, dbid, instance_number) a,
                       dba_hist_snapshot b
                 where a.snap_id = b.snap_id
                   and a.dbid = b.dbid
                   and a.instance_number = b.instance_number
                 order by end_interval_time)) s,
       (select lead(a.value, 1, null) over(partition by b.startup_time order by b.end_interval_time) - a.value db_time,
               lead(b.snap_id, 1, null) over(partition by b.startup_time order by b.end_interval_time) endsnap_id
          from dba_hist_sys_time_model a, dba_hist_snapshot b
         where a.snap_id = b.snap_id
           and a.dbid = b.dbid
           and a.instance_number = b.instance_number
           and a.stat_name = 'DB time') t
 where s.endsnap_id = t.endsnap_id
 order by s.snap_date, time desc;