
Influence of new Oracle version – well actually not

Recently I have been working on application testing related to a migration from 10.2.0.3 to 11.1.0.7, on new hardware as well. More and faster CPUs and significantly faster I/O created some expectations.

So (as usual) some significantly slower batch jobs were a bad, but somewhat expected, surprise. Here is one example:

A batch job took 37 minutes instead of 15. I took one of the worst-performing statements, taking 270 seconds instead of 60, to investigate. The usual suspects, such as a changed plan, could be ruled out rather quickly. As a matter of fact, after a while I could reduce the statement to a simple, reproducible test.

Preparation:


create table testsz
(i number, wa varchar2(5), vt varchar2(1), v varchar2(400));

create table testsz2
(i number, wa varchar2(5), vt varchar2(1), v varchar2(400));
create index testsz2_i on testsz2(wa,vt);

insert into testsz
with gen as (select rownum rn  from dual connect by level<=1e5)
select mod(rn, 1000),
       case  mod(rn,3) when 0 then 'A' when 1 then 'B' when 2 then 'C' else 'D' end||mod(rn, 2000),
       case when mod(rn,1) =0 then 'Y' else 'Q' end,
       lpad('x',300,'x')
  from gen;
commit;

Test:

set timing on

truncate table testsz2;

insert into testsz2
  select *
   from testsz;

The exact times vary depending on system and hardware, but the difference was significant on all systems tested.

Results on 10g vs 11g:   5 seconds vs 20 seconds.

This bugged me for quite some time. I tried all kinds of really stupid and otherwise non-relevant things to investigate, to no avail. There must be something wrong with Oracle, right?

Finally I detected an increase in FB enqueue contention on 11g. So there must be something wrong with the new Oracle release, right?
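For reference, a quick way to spot this kind of contention is to look at the system-wide wait events. This is a minimal sketch, assuming the standard 10g/11g event name for the format-block enqueue (used by ASSM when formatting new blocks):

select event, total_waits, time_waited
  from v$system_event
 where event like 'enq: FB%';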

I searched for bug reports like mad and found nothing. (Not that the increase was actually significant, and had the tablespaces not been ASSM it would have looked different anyway.) Still, this reminded me that a few weeks ago I had asked about a changed init setting, but had not bothered to follow up. A quick check showed that, luckily, the parameter is session-modifiable:
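To check the current value before running the modified test, either of these works (standard SQL*Plus and v$parameter, nothing specific to this setup):

show parameter db_block_checking

select value
  from v$parameter
 where name = 'db_block_checking';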

Modified test, on 11g only:

set timing on

prompt block checking false
alter session set db_block_checking=false;
truncate table testsz2;

insert into testsz2
select * from testsz;

prompt block checking true
alter session set db_block_checking=true;
truncate table testsz2;

insert into testsz2
select * from testsz;

3 seconds vs 20 seconds. So this has nothing to do with the new Oracle release, but with a changed initialization parameter. I have not investigated the causes much further, but the difference is also there in 10g, at least on the systems I have access to.
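If you want to verify on your own system that the extra time is CPU spent checking blocks (rather than waits), a sketch is to sample the session CPU statistic before and after each insert. This assumes the standard statistic name from v$statname; the delta between the two runs should roughly match the elapsed-time difference:

select s.value
  from v$mystat s
  join v$statname n on n.statistic# = s.statistic#
 where n.name = 'CPU used by this session';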

Obviously the db_block_checking setting has an overhead, but frankly I am surprised by its magnitude, considering the quote in the Oracle documentation (from 11.1):

Block checking typically causes 1% to 10% overhead, depending on workload and the parameter value. The more updates or inserts in a workload, the more expensive it is to turn on block checking. You should set DB_BLOCK_CHECKING to FULL if the performance overhead is acceptable.

That difference is certainly not acceptable to me. Has anyone else seen such a big difference?

Categories: Performance, Trouble