Today I did a reorganization of a 1,7TB tablespace and finally got it down to 620GB. (I have to keep 10% free in a tablespace to avoid alerts.) There were some tables around 30GB with smaller LOBs, and two tables with a 1TB LOB and a 500GB LOB. To know it would work, a test case on the standby database was necessary. What I found out is the following. The two big tables have a structure like:
ID PRIMARY KEY VARCHAR2(40), DESCRIPTION VARCHAR2(255), SIZE NUMBER, DATA BLOB. The ID was something like a hex number, so the queries here are not the real ones I used; there was a SUBSTR and an ASCII function in them too. DESCRIPTION describes the data, SIZE is the size of the BLOB, and DATA is the file or something like that. It's a doc table; I don't know what the application stores, and for a DBA it's not necessary to know. What I did to reorganize it was:
- a new tablespace
- create table new as select * from current where rownum <= 0;
- 8 sessions with: insert /*+ append */ into new select * from current where mod(id, 8) = 0-7;
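Since the ID is a hex VARCHAR2, mod(id, 8) does not work directly; my real predicate used SUBSTR and ASCII. Spelled out, each of the eight sessions ran something like this (the bucket expression below is only a sketch, not the exact one I used):

```sql
-- Session N runs one of these, with N = 0..7.
-- Bucket on the last hex digit of ID; this exact expression is an
-- illustration, not the real predicate from the reorg.
insert /*+ append */ into new
select * from current
where mod(ascii(substr(id, -1)), 8) = 0;   -- = 1, 2, ... 7 in the other sessions
commit;
```

The direct-path (append) insert is what makes this fast, but it is also what causes the locking behavior described next.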
Here I found out that when you start them in parallel, you get enq: TM - contention on sessions 1-7. Funny, so I googled and discovered that to solve it I need to create a foreign key:
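While the sessions were blocked, the waits were visible in v$session; a quick check looks roughly like this (standard Oracle dynamic views, shown as a sketch):

```sql
-- Which insert sessions are stuck on the TM enqueue, and who blocks them.
select sid, event, blocking_session, seconds_in_wait
from   v$session
where  event like 'enq: TM%';
```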
alter table new add constraint fk_new_id foreign key (id) references current (id);
After I did this and restarted all the insert sessions, I got direct path read waits on all sessions and a throughput of around 1,3GB/min. The longest insert took 1 hour 30 minutes, so in this time a reorg of the 500GB LOB was possible, 20 datafiles on one filesystem. In the end the LOB was around 160GB.
alter table new drop constraint fk_new_id;
alter table new add constraint pk_id primary key (id) using index tablespace index_tbs;
On the 1TB LOB I used the same procedure and finished in 4 hours and 30 minutes. So I reduced 1,1TB between 23:00 and 06:00. Well, it's interesting that when you use parallel direct-path inserts you get contention and a foreign key helps to solve it.
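Not shown above is the final swap of the tables. Roughly, it looks like this (names, grants and secondary indexes depend on your schema, so take it as a sketch rather than the exact commands I ran):

```sql
-- Final swap: keep the old table around until the application is verified.
alter table current rename to current_old;
alter table new rename to current;
-- Re-create grants, synonyms and any secondary indexes here,
-- then drop the old table once everything checks out:
-- drop table current_old purge;
```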