Posted in 2015

Indexing only specific rows – it did not work at first – fixed (it works)

So, last year I was in Belgrade at the AskTom seminar (Belgrade Seminar 2014.zip on asktom.oracle.com). Now I have time to play, so I wanted to test indexing only specific rows; I decided on the value 1.

As you will see, you can index only specific values, but some additional "objects" (a virtual column, or a function-based index plus a view) are necessary to make it work.

SQL> create table test (id_test number generated always as (case when id=1 then 1 else null end), id number) ;

Table created.

SQL> create index idx_test on test (id_test) ;

Index created.

SQL> insert into test (id) select level from dual connect by level <= 100 ;

100 rows created.

SQL> /

100 rows created.

SQL> /

100 rows created.

SQL> commit ;

Commit complete.
SQL> exec dbms_stats.gather_table_stats(user, 'test') ;

PL/SQL procedure successfully completed.

SQL> select DISTINCT_KEYS,BLEVEL,LEAF_BLOCKS,NUM_ROWS,SAMPLE_SIZE from user_indexes where index_name='IDX_TEST' ;

DISTINCT_KEYS     BLEVEL LEAF_BLOCKS   NUM_ROWS SAMPLE_SIZE
------------- ---------- ----------- ---------- -----------
            1          0           1          4           4

SQL> set autotrace on
SQL> select count(*) from test where id_test=1 ;

  COUNT(*)
----------
         4

Execution Plan
----------------------------------------------------------
Plan hash value: 2901380809

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |     1 |     2 |     1   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE   |          |     1 |     2 |            |          |
|*  2 |   INDEX RANGE SCAN| IDX_TEST |     4 |     8 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------
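Why does the index stay so small? A B-tree index does not store keys that are entirely NULL, so only the rows where the CASE expression evaluates to 1 get an index entry; all the other rows are simply not in the index. A quick check of the segment (a hypothetical follow-up, not from the original session):

select segment_name, blocks, bytes/1024 as kb
from   user_segments
where  segment_name = 'IDX_TEST' ;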

Or, the same result with a function-based index and a view:

SQL> create table test2 (id number) ;

Table created.

SQL> insert into test2 select level from dual connect by level <= 100 ;

100 rows created.

SQL> /

100 rows created.

SQL> /

100 rows created.

SQL> commit ;

Commit complete.

SQL> create index idx_test on test2 (case when id=1 then 1 else null end) ;

Index created.

SQL> create or replace view v_test as select t."ID", (case when id=1 then 1 else null end) index_id from test2 t ;

View created.

SQL> select count(*) from v_test where index_id=1 ;

  COUNT(*)
----------
         4

Execution Plan
----------------------------------------------------------
Plan hash value: 2901380809

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |     1 |     2 |     1   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE   |          |     1 |     2 |            |          |
|*  2 |   INDEX RANGE SCAN| IDX_TEST |     4 |     8 |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------
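Note that the view is only there to hide the expression. A query against the base table that repeats the CASE expression should use the function-based index just as well, as long as the expression matches the index definition exactly:

select count(*) from test2
where (case when id=1 then 1 else null end) = 1 ;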
Posted in 2015

How to create a 12c Container DB manually

Hi,

As the world always moves forward, I cannot wait until my customer decides to use the new 12c technology. Virtualization rocks, so my VirtualBox rocks too. I know it costs money if you use it in production, so it's as always: money rules the world. I have googled a lot to find out how to create a container database manually, but I could not find a guide with every step.

You only need to run the specific SQL scripts and you get an empty container database, without any optional components:

Manual 12c Container DB
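The linked document has every step. The heart of it is the CREATE DATABASE statement with the ENABLE PLUGGABLE DATABASE clause, which is what makes the database a container database (the instance also has to be started with enable_pluggable_database=true in the init file). A minimal sketch, where the paths, passwords and sizes are my illustrative assumptions, not values from the document:

create database cdb1
  user sys identified by mypassword
  user system identified by mypassword
  logfile group 1 ('/u01/oradata/cdb1/redo01.log') size 100m,
          group 2 ('/u01/oradata/cdb1/redo02.log') size 100m
  character set al32utf8
  datafile '/u01/oradata/cdb1/system01.dbf' size 700m
  sysaux datafile '/u01/oradata/cdb1/sysaux01.dbf' size 550m
  default temporary tablespace temp
    tempfile '/u01/oradata/cdb1/temp01.dbf' size 100m
  undo tablespace undotbs1
    datafile '/u01/oradata/cdb1/undotbs01.dbf' size 200m
  -- this clause is what makes it a container database
  enable pluggable database
    seed file_name_convert = ('/u01/oradata/cdb1/',
                              '/u01/oradata/cdb1/pdbseed/') ;

After that, catalog.sql and the other dictionary scripts are run through catcon.pl so they execute in both the root and the seed.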

Posted in 2015

TM contention and INSERT /*+ APPEND */ parallel sessions?

Today I did a reorganization of a 1.7TB tablespace, finally reduced to 620GB (I have to keep 10% free in a tablespace to avoid alerts). There were some tables around 30GB with smaller LOBs, and two tables, one with a 1TB LOB and one with a 500GB LOB. To know it would work, a test case on the standby database was necessary, and here is what I found out.

The two big tables have a structure like: ID VARCHAR2(40) PRIMARY KEY, DESCRIPTION VARCHAR2(255), SIZE NUMBER, DATA BLOB. The ID was something like a hex number, so it's not the real query I use here; there was a substr and an ascii function too. The columns hold a description of the data, the size of the BLOB and finally the file, or something like that; it's a document table, I don't know what the application stores in it, and for a DBA it's not necessary to know. What I did to reorganize it was:

  • a new tablespace
  • create table new as select * from current where rownum<=0;
  • 8 parallel sessions with: insert /*+ append */ into new select * from current where mod(id,8) = N, one value of N from 0 to 7 per session (sketched below)
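
A sketch of one of the eight sessions (the real predicate used substr and ascii on the hex ID; the expression below is an illustrative stand-in, with :N being a different value from 0 to 7 in each session):

insert /*+ append */ into new
select * from current
where mod(ascii(substr(id,-1)),8) = :N ;
commit ;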

Here I found out that when you start them in parallel, you get enq: TM - contention on sessions 1-7. Funny, so I googled and discovered that to solve it I need to create a foreign key:

alter table new add constraint fk_new_id foreign key (id) references current (id) ;

A likely explanation: a direct-path insert takes an exclusive TM lock on the target table, so concurrent APPEND inserts serialize behind each other, and a referential integrity constraint on the target table disables direct-path insert, so the sessions silently fall back to conventional inserts and can run concurrently. After I did this and restarted all the insert sessions I got DIRECT PATH READ on all sessions and a throughput of around 1.3GB/min. The longest INSERT took 1 hour 30 minutes, so in this time a reorg of the 500GB LOB was possible, 20 datafiles on one filesystem. Finally the LOB had around 160GB. Afterwards:

alter table new drop constraint fk_new_id ;
alter table new add constraint pk_id primary key (id) using index tablespace index_tbs ;

On the 1TB LOB I used the same procedure and finished in 4 hours and 30 minutes. So I reduced 1.1TB between 23:00 and 06:00. Well, it's interesting that with parallel direct-path inserts you get a contention, and a foreign key helps to solve it.
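
By the way, a simple way to check whether an INSERT really ran direct-path is to query the table in the same session before committing; a direct-path insert raises ORA-12838, a conventional one does not:

insert /*+ append */ into new select * from current ;
select count(*) from new ;
-- ORA-12838: cannot read/modify an object after modifying it in parallel
--   => the insert was direct-path
-- a normal row count => the APPEND hint was silently ignored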

Posted in 2015

Little Statspack Performance monitoring SQL

Well, you know Average Active Sessions (AAS). If you don't have AWR, then Statspack is a way to get this information. The query displays how many sessions were active on average per second:

select t.snap_time, round(t.dbtime/t.elapsed,2) as "AvgActiveSessions"
from (
  select
    snap_time,
    stat_name,
    -- the time model values are in microseconds, hence /1000000
    nvl(round((l.value - lag(l.value) over (order by l.snap_id))/1000000, 2), 0) as dbtime,
    nvl((s.snap_time - lag(s.snap_time) over (order by s.snap_id asc))*24*60*60, 0) as elapsed
  from stats$snapshot s, v$sys_time_model m, stats$sys_time_model l
  where m.stat_id = l.stat_id
    and s.snap_id = l.snap_id
    and m.stat_name = 'DB time'
    and snap_time between &timelow and &timehigh
  order by 1 asc
) t
where t.dbtime != 0
/
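Usage, with the date format being my assumption (adjust it to your needs):

define timelow  = "to_date('2015-01-19 08:00','YYYY-MM-DD HH24:MI')"
define timehigh = "to_date('2015-01-19 17:00','YYYY-MM-DD HH24:MI')"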

The same query can be used with the stat name 'DB CPU'; that number should be smaller than the AAS, as it says how many CPU seconds were used per second. Alternatively, the same information can be pulled from stats$sysstat:

select g.snap_time, round(g.cpu/g.elapsed, 2) from (
  select
    t.snap_time,
    -- 'CPU used by this session' is in centiseconds, hence /100
    round((n.value - lag(n.value,1,n.value) over (order by n.snap_id)), 0)/100 cpu,
    nvl((t.snap_time - lag(t.snap_time) over (order by t.snap_id asc))*24*60*60, 0) as elapsed
  from stats$sysstat n, stats$snapshot t
  where n.snap_id in (select snap_id from stats$snapshot
                      where snap_time between &timelow and &timehigh)
    and n.name = 'CPU used by this session'
    and t.snap_id = n.snap_id
  order by 1, 2
) g
where g.cpu != 0
/

This query shows the same kind of information about CPU usage. Of course the numbers are not exactly the same, as the underlying structures are not refreshed at the same time (v$sysstat and v$sys_time_model; I know these are views, but I mean the structures underneath them).

Example:
AAS = 1.2
DB CPU = 0.8

There were on average 1.2 active sessions per second, and they used 0.8 seconds of CPU per second. Great.. now if you divide this 0.8 by CPU_COUNT and multiply by 100, you get the CPU usage in %.
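For instance, on a hypothetical box with CPU_COUNT = 4:

select round(0.8 / 4 * 100) as cpu_pct from dual ;  -- 20, i.e. 20% CPU usage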

Posted in 2015

Gather stats on tables with data

Sometimes, while supporting application upgrades, we get tasks to gather statistics on tables that actually contain data. I have tried to implement a procedure which does the gathering; the input is the schema name. As I cannot implement procedures on a production system just because it would make things easier, I implemented it on a test database and tested it with my own schema. Well, it works.

create or replace PROCEDURE sys$gather_tab_stats(schema_name IN varchar2) as
  cursor c1 is select table_name from dba_tables where owner = schema_name ;
  cursor_name    integer ;
  rows_processed integer ;
  cnt            number ;
  begin_time     number ;
  end_time       number ;
BEGIN
  FOR output_tabs IN c1
  LOOP
    dbms_output.put_line('Processing table: '||output_tabs.table_name) ;
    -- a sampled count is enough to decide whether the table contains data
    cursor_name := dbms_sql.open_cursor ;
    dbms_sql.parse(cursor_name,
                   'select count(*) from '||schema_name||'.'||output_tabs.table_name||' sample(10)',
                   DBMS_SQL.NATIVE);
    dbms_sql.define_column(cursor_name, 1, cnt) ;
    rows_processed := DBMS_SQL.EXECUTE(cursor_name);
    IF DBMS_SQL.FETCH_ROWS(cursor_name) > 0
    THEN
      DBMS_SQL.COLUMN_VALUE(cursor_name, 1, cnt) ;
      dbms_output.put_line('COUNT: '||cnt);
      IF cnt > 0 THEN
        begin_time := DBMS_UTILITY.GET_TIME;
        DBMS_STATS.GATHER_TABLE_STATS(schema_name, output_tabs.table_name) ;
        end_time := DBMS_UTILITY.GET_TIME;
        -- GET_TIME counts in hundredths of a second
        dbms_output.put_line('elapsed time: '||to_char((end_time-begin_time)/100)||' secs.') ;
      END IF;
    END IF;
    DBMS_SQL.CLOSE_CURSOR(cursor_name);
  END LOOP ;
EXCEPTION
  WHEN OTHERS THEN
    -- close the cursor before raising; a statement placed after
    -- raise_application_error would never be reached
    IF cursor_name IS NOT NULL AND DBMS_SQL.IS_OPEN(cursor_name) THEN
      DBMS_SQL.CLOSE_CURSOR(cursor_name);
    END IF;
    raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
END;
/

Usage:
exec sys$gather_tab_stats('TEST_SCHEMA')
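
Note that dbms_output prints nothing unless serveroutput is enabled in the session:

SQL> set serveroutput on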

Posted in 2015

Monitoring redo generation

Sometimes I check how much redo is generated during the day, to know whether I have enough space in the archive log destination during reorgs.

To check redolog generation I use:

break on SUBSTRING skip 1
compute sum label 'TOTAL' of MB on SUBSTRING
set lines 180 pages 999


select count(1) "Rate",
       trunc(FIRST_TIME,'HH24') "By Hour",
       round(sum(blocks*block_size)/1024/1024, 2) MB,
       trunc(FIRST_TIME,'DD') as substring
from   v$archived_log
where  dest_id = 1
group by trunc(FIRST_TIME,'HH24'), trunc(FIRST_TIME,'DD')
order by 2 ;

or:

PROMPT Max redo in MB
with max_day as (
  select trunc(FIRST_TIME,'DD') as day
  from   v$archived_log
  group by trunc(FIRST_TIME,'DD')
  having count(*) = (select max(cnt)
                     from (select count(*) cnt, trunc(FIRST_TIME,'DD')
                           from v$archived_log
                           group by trunc(FIRST_TIME,'DD')))
)
select sum(blocks*block_size)/1024/1024 "MB per Max Day", max_day.day
from   v$archived_log, max_day
where  trunc(FIRST_TIME,'DD') = max_day.day
  and  dest_id = 1
group by max_day.day ;

Sometimes you have to decide how much storage you need for a database. You are not an oracle who knows how the application works, what the workload looks like and other details; the developers should say how much is needed in the beginning. After some time, though, it's possible even for you:

  • We take the busiest day seen so far and add 25% more storage; it should be enough for 24 hours (see the query below).


with l as (select distinct bytes/1024/1024/1024 as siz from v$log),
     c as (select max(cnt) cntl
           from (select count(1) cnt, trunc(FIRST_TIME,'DD') sdate
                 from v$archived_log
                 where dest_id = 1
                 group by trunc(FIRST_TIME,'DD')))
select l.siz * c.cntl * 1.25 "Size for arch dest for 24h"
from   l, c ;
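
A hypothetical worked example: with 2GB online redo logs and a peak of 120 archived logs on the busiest day, that is 2 * 120 * 1.25 = 300GB for the archive destination. Note that the query assumes all online redo logs have the same size; if they differ, the distinct in the first subquery returns more than one row and the result multiplies out wrong.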