Tuesday, April 14, 2015

Disk overhead on Exadata X2

I recently had a discussion about the storage available from the Exadata storage cells, in this case on an X2. Please note that although the disks are advertised as 2TB in size, there is some overhead at each stage, from the physicaldisk layer down to the celldisk layer, which I show in this posting.


From the example below we can see the physical disk is a 2TB model, per the makeModel property, but the physicalSize is reported as about 1862.66GB, a drop of 185.34GB from 2048GB.


CellCLI> list physicaldisk 20:0 detail
         name:                   20:0
         deviceId:               19
         diskType:               HardDisk
         enclosureDeviceId:      20
         errMediaCount:          0
         errOtherCount:          0
         foreignState:           false
         luns:                   0_0
         makeModel:              "SEAGATE ST32000SSSUN2.0T"
         physicalFirmware:       061A
         physicalInsertTime:     xxxxxxxxxxxxxxxxxxx
         physicalInterface:      sas
         physicalSerial:         xxxxxxxxxxxxxxxxxxx
         physicalSize:           1862.6559999994934G
         slotNumber:             0
         status:                 normal
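
Most of that 185GB gap is simply a units difference rather than lost space: the drive is marketed as 2TB decimal (2 x 10^12 bytes), while CellCLI reports sizes in binary gigabytes. A quick sanity check with plain SQL arithmetic (nothing Exadata-specific assumed here):

select 2*power(10,12)/power(1024,3) as advertised_2tb_in_binary_gb from dual;

This returns roughly 1862.65, which lines up with the physicalSize shown above.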



At the celldisk level the size is reported as about 1832.59GB, a further drop of about 30GB.

CellCLI> list celldisk CD_00_cel01 detail
         name:                   CD_00_cel01
         comment:
         creationTime:           xxxxxxxxxxxxxxxxxxx
         deviceName:             /dev/sda
         devicePartition:        /dev/sda3
         diskType:               HardDisk
         errorCount:             0
         freeSpace:              0
         id:                     xxxxxxxxxxxxxxxxxxx
         interleaving:           none
         lun:                    0_0
         physicalDisk:           xxxxxxxxxxxxxxxxxxx
         raidLevel:              0
         size:                   1832.59375G
         status:                 normal
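
The list of sizes below covers all twelve celldisks on cel01. One way to collect it (a sketch; the cell name, user and grep filter are my assumptions, not taken from the original output) is to run cellcli through dcli and pull out the size attribute:

dcli -c cel01 -l celladmin "cellcli -e list celldisk detail" | grep " size:"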


cel01: size:                 1832.59375G
cel01: size:                 1832.59375G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G
cel01: size:                 1861.703125G

Finally, we can see the overhead at each level: starting from a 2TB physical disk we end up with about 1832.59GB of space available to ASM, and that is before the disk is added to a diskgroup with NORMAL or HIGH redundancy, which reduces the available space even further through ASM mirroring. That works out to roughly 89.5% of the advertised 2048GB per disk, an overhead of about 10.5%. Note that the extra drop of about 30GB at the celldisk layer only applies to the first and second disk in each storage cell, since those two disks also carry the cell system area (the celldisk above sits on the /dev/sda3 partition rather than the whole device). The remaining celldisks come in at about 1861.7GB, an overhead of only about 1GB.
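
The figures above can be reproduced with plain arithmetic against the sizes already listed (a quick sketch, no values assumed beyond those shown):

select round(1832.59375/2048*100, 1)     as pct_of_2048gb,
       round(1862.656 - 1832.59375, 2)   as gb_overhead_first_two_disks,
       round(1862.656 - 1861.703125, 2)  as gb_overhead_remaining_disks
  from dual;

This returns roughly 89.5, 30.06 and 0.95.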

One more item to note: the grid disk size will match the size reported in v$asm_disk, since the grid disks are what is presented to ASM. Each celldisk is typically carved into several grid disks (DATA, RECO and so on), which is why this DATA grid disk at 1466GB is smaller than the 1832.59GB celldisk it is built on.


CellCLI> list griddisk DATA_CD_00_cel01 detail
         name:                   DATA_CD_00_cel01
         asmDiskgroupName:       DATA
         asmDiskName:            DATA_CD_00_CEL01
         asmFailGroupName:       CEL01
         availableTo:
         cachingPolicy:          default
         cellDisk:               CD_00_cel01
         comment:
         creationTime:           xxxxxxxxxxxxxxxxxxx
         diskType:               HardDisk
         errorCount:             0
         id:                     xxxxxxxxxxx
         offset:                 32M
         size:                   1466G
         status:                 active



select name, TOTAL_MB/1024 from v$asm_disk;


NAME                           TOTAL_MB/1024
------------------------------ -------------

...
DATA_CD_01_CEL01                    1466
....
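
To compare the cell and ASM views of every disk at once, v$asm_disk also exposes OS_MB alongside TOTAL_MB; a query along these lines (a sketch using standard v$asm_disk columns, not taken from the original post) lists both sizes side by side:

select failgroup, name, os_mb/1024 as os_gb, total_mb/1024 as total_gb
from v$asm_disk
order by failgroup, name;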

Please keep this in mind when sizing diskgroups and doing future capacity planning as you set up your Exadata storage.
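
Once the diskgroups exist, the effect of NORMAL or HIGH redundancy on usable space can be checked directly from ASM, for example with something along these lines against the standard v$asm_diskgroup columns (a sketch, not from the original post):

select name, type, total_mb/1024 as total_gb, free_mb/1024 as free_gb,
       required_mirror_free_mb/1024 as req_mirror_free_gb,
       usable_file_mb/1024 as usable_file_gb
from v$asm_diskgroup;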
