11gR2 ASM – Inside story – Part4 (ASM Preferred Read Failure Group)
Introduction:
In the previous post, “11g ASM – Inside story – Part3”, we saw how to create a Data Pump dump set within an ASM diskgroup.
In this Part4 of the story, we will see how to configure the ASM instance with a list of preferred failure group names to use when reading from ASM disks.
This new 11g parameter is particularly useful for Real Application Clusters (RAC) instances whose failure groups are located miles apart.
You can configure each instance to read from the closest ASM disks.
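For example, in an extended RAC configuration you would typically give each ASM instance its own preferred failure group through the SID clause of ALTER SYSTEM. The sketch below assumes two ASM instances named +ASM1 and +ASM2, each sitting next to one of the DATA_RAC failure groups used later in this post; adapt the names to your environment.
-- Hypothetical two-node setup: +ASM1 is local to FG1, +ASM2 is local to FG2
ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA_RAC.FG1' SID = '+ASM1';
ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA_RAC.FG2' SID = '+ASM2';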
The demonstration:
In this demonstration, we will create a new diskgroup named “DATA_RAC” with two failure groups of disks.
In the database instance, we will create a new tablespace “DATA_RAC_READ” whose datafile is stored in the new DATA_RAC diskgroup.
We will create a new table TEST_READ stored in the DATA_RAC_READ tablespace.
We will collect disk read statistics on both failure groups.
We will then change the new ASM_PREFERRED_READ_FAILURE_GROUPS parameter.
We will load the system and check the I/O calls via the V$ASM_DISK_IOSTAT view and via V$ASM_DISK (GV$ASM_DISK in a RAC configuration; a cluster-wide variant is sketched right after this outline).
Finally, we will collect disk read statistics on both failure groups after the parameter change.
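Since the walkthrough below runs on a single node, here is, as a sketch only, the cluster-wide variant of that check using GV$ASM_DISK, grouping the read and write counters per instance. The GROUP_NUMBER value is an assumption and depends on your environment.
SELECT INST_ID, FAILGROUP, SUM(READS), SUM(WRITES)
FROM GV$ASM_DISK
WHERE GROUP_NUMBER = 2        -- adjust to your DATA_RAC group number
GROUP BY INST_ID, FAILGROUP
ORDER BY INST_ID, FAILGROUP;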
SQL> create diskgroup DATA_RAC NORMAL REDUNDANCY
2 FAILGROUP FG1 DISK '/dev/raw/raw5'
3 FAILGROUP FG2 DISK '/dev/raw/raw6';
Diskgroup created.
SQL>
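To double-check that the diskgroup was created with the expected redundancy, a quick query against V$ASM_DISKGROUP from the ASM instance is enough; this is only a sanity-check sketch, not part of the original session.
SELECT NAME, TYPE, STATE, TOTAL_MB, FREE_MB
FROM V$ASM_DISKGROUP
WHERE NAME = 'DATA_RAC';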
SQL> create tablespace DATA_RAC_READ DATAFILE '+DATA_RAC';
Tablespace created.
SQL> create table test_read (NAME varchar2(10)) tablespace DATA_RAC_READ;
Table created.
SQL> insert into test_read
2 select rownum
3 from dual
4 connect by level <= 2e3;
2000 rows created.
SQL> commit;
Commit complete.
SQL>
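From the database instance, you can also confirm that the new datafile was indeed created inside the DATA_RAC diskgroup; the query below is a small verification sketch.
SELECT FILE_NAME, BYTES/1024/1024 AS SIZE_MB
FROM DBA_DATA_FILES
WHERE TABLESPACE_NAME = 'DATA_RAC_READ';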
From ASM Instance:
SQL> select FAILGROUP, SUM(READS),SUM(WRITES) from v$ASM_DISK
2 WHERE GROUP_NUMBER=2
3 GROUP BY FAILGROUP;
FAILGROUP SUM(READS) SUM(WRITES)
------------------------------ ---------- -----------
FG1 62 824
FG2 22 824
SQL>
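The GROUP_NUMBER=2 predicate is specific to my environment; an equivalent sketch keyed on the diskgroup name, via a join to V$ASM_DISKGROUP, avoids hard-coding the group number.
SELECT D.FAILGROUP, SUM(D.READS), SUM(D.WRITES)
FROM V$ASM_DISK D
JOIN V$ASM_DISKGROUP G ON G.GROUP_NUMBER = D.GROUP_NUMBER
WHERE G.NAME = 'DATA_RAC'
GROUP BY D.FAILGROUP;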
From DB instance:
SQL> SELECT COUNT(*) from TEST_READ;
COUNT(*)
----------
22000
From the ASM instance, note that FG1 initially shows more I/O reads than FG2, but the reads are then distributed across FG1 and FG2.
I have observed a read distribution of approximately 1:1, which becomes more apparent as the load increases.
SQL> select FAILGROUP, SUM(READS),SUM(WRITES) from v$ASM_DISK
2 WHERE GROUP_NUMBER=2
3 GROUP BY FAILGROUP;
FAILGROUP SUM(READS) SUM(WRITES)
------------------------------ ---------- -----------
FG1 63 867
FG2 23 867
SQL>
From the ASM instance, set ASM_PREFERRED_READ_FAILURE_GROUPS to FG2. Note that the parameter is dynamic: there is no need to bounce the instance or to dismount and remount the diskgroup.
SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS ='DATA_RAC.FG2';
System altered.
SQL> select FAILGROUP, NAME, PREFERRED_READ
2 from v$ASM_DISK;
FAILGROUP NAME P
------------------------------ ------------------------------ -
DISK4 DISK4 U
DISK3 DISK3 U
DISK2 DISK2 U
DISK1 DISK1 U
FG2 DATA_RAC_0001 Y
FG1 DATA_RAC_0000 N
7 rows selected.
SQL>
Note the value of “Y” in the PREFERRED_READ column of the V$ASM_DISK view.
Now, from the DB instance, we will generate some I/O load, for example by running the statements and script below several times:
SQL> drop table test_io purge;
Table dropped.

SQL> create table test_IO (ID VARCHAR2(1000)) TABLESPACE DATA_RAC_READ;
Table created.

SQL> insert into test_io
  2  select rownum AS ID
  3  from dual
  4  connect by level <= 2e3;
2000 rows created.

SQL> commit;
Commit complete.

SQL> declare
  2  W_ VARCHAR2(10);
  3  BEGIN
  4  for i in 1..10000 loop
  5  select count(*) into W_ from test_IO WHERE ID=I;
  6  end loop;
  7  end;
  8  /
During the load, from the ASM instance, we notice more reads on FG2, the preferred read failure group.
ASM Instance:
SQL> select FAILGROUP, SUM(READS),SUM(WRITES) from v$ASM_DISK
2 WHERE GROUP_NUMBER=2
3 GROUP BY FAILGROUP;
FAILGROUP SUM(READS) SUM(WRITES)
------------------------------ ---------- -----------
FG1 132 1793
FG2 136 1793
SQL>
During the load, I have noticed a read ratio of roughly 1:2 between FG1 and the preferred failure group FG2.
Note that we had the following I/O statistics before changing the parameter; of course, as you may have noticed, my test environment is not a really busy system!
SQL> SELECT FAILGROUP, READS, WRITES
  2  from v$ASM_DISK_IOSTAT
  3  WHERE GROUP_NUMBER = 2
  4  ;
FAILGROUP READS WRITES
------------------------------ ---------- ----------
FG2 1 240
FG1 7 240
After changing the parameter, the I/O statistics show the following values, with noticeably more reads on FG2.
SQL> SELECT FAILGROUP, READS, WRITES
  2  from v$ASM_DISK_IOSTAT
  3  WHERE GROUP_NUMBER = 2
  4  ;
FAILGROUP READS WRITES
------------------------------ ---------- ----------
FG2 44 324
FG1 7 324
SQL>
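To wrap up, you can check the current setting at any time from the ASM instance and, if needed, clear it by setting the parameter back to an empty string so that reads are again balanced across the failure groups; the two statements below are a minimal cleanup sketch.
-- display the current value
show parameter asm_preferred_read_failure_groups

-- remove the read preference
ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = '';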