
4720 24 drives Seq. access test

4720 24 drives Seq. access test. Targeting video post-production use, focusing on sequential access performance. Compares R50, host 2-stripe LVM * 2, and 4-stripe LVM. The DH4720 can hold 24 SFF (2.5") drives in a 2U-height chassis, a good form factor if the requirements are met. Windows 2008 R2 environment.


Presentation Transcript


  1. 4720 24 drives Seq. access test
  • Targeting video post-production use
  • Focusing on sequential access performance
  • Compare R50, host 2-stripe LVM * 2 and 4-stripe LVM
  • DH4720 can hold 24 SFF (2.5") drives in a 2U-height chassis. Good form factor if the requirements are met.
  • Windows 2008 R2 environment
  • Better to utilize both A/B controllers (not a single big R50). Vdisks should be owned by each.
  • Need to utilize LVM/stripe on the host
  • Even with 24 drives, 8Gb * 4 connections should be used
  • A bottleneck appears earlier with only two connections
  • When a single, best-performing stream is needed, configure a 4-stripe LVM
  • Total capacity is 600GB * 24 - parity ~= 11TB (see the worked check below)
  • Whether it can serve more workstations or users depends on how their work/behavior overlaps.
  • Should also test NL-SAS (1TB) SFF drives
  Dot Hill Systems
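A quick back-of-the-envelope check of the ~11TB figure, assuming the RAID50 layout described on the next slide (four 6-drive RAID5 sub-arrays, so four drives' worth of parity) and decimal drive sizing; this is an illustrative sketch, not a number taken from the deck:

```python
# Hedged sketch: usable capacity of 24 x 600GB drives arranged as a RAID50
# built from four 6-drive RAID5 sub-arrays (one parity drive per sub-array).
DRIVE_GB = 600               # decimal GB per drive, as marketed
DRIVES = 24
SUB_ARRAYS = 4               # 6-drive RAID5 x 4 stripes
PARITY_DRIVES = SUB_ARRAYS   # one drive's worth of parity per sub-array

usable_gb = (DRIVES - PARITY_DRIVES) * DRIVE_GB    # 20 * 600 = 12,000 GB
usable_tib = usable_gb * 1e9 / 2**40               # convert decimal GB to binary TiB

print(f"usable: {usable_gb} GB decimal, about {usable_tib:.1f} TiB")  # ~10.9 TiB, i.e. ~11TB
```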

  2. 24drv bmk config 1 (storage stripe)
  • Host: Dell R610, Windows 2008 R2, QLE2564 HBA, Iometer 2010. Windows uses the directly assigned 12TB * 1 volume, with 1 or 2 paths (up to 4 paths when needed).
  • Array: 4720 and 4824, 24 drives, FC-DAS 8Gb * 2, 1 or 2 paths. FRUKA19 (shorty v2 chassis), KF64 (SFF 600GB 10K SAS), Hitachi HUC1060* x 24, arranged as four groups of 600GB 10K * 6.
  • Layout: one RAID50 of 6-drive sub-R5 x 4 stripes, all owned by CUA (controller A), chunk size = 256k, presented as a single 12TB volume (a chunk-mapping sketch follows this slide).
  • 4824 (CUA only – still redundant): FRUKC54, 4 * 16G FC, Linear LX, 4GB, Sandy Bridge 1.3GHz SC, ASIC (Mangy Moose)
  • 4720 (CUA only – still redundant): FRUKC50, 4 * 8G FC, Linear FX, 4GB, Arrandale 1.8GHz SC, Virtex-6 FPGA (Jerboa)
  Dot Hill Systems
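A minimal sketch of how a logical offset could map onto this storage-side layout (four 6-drive RAID5 sub-arrays striped with a 256KB chunk). The round-robin mapping below is a simplification for illustration, not documented Dot Hill firmware behavior; it only shows why large sequential I/O ends up spread across all four sub-arrays:

```python
# Hedged sketch: which RAID5 sub-array a 256KB chunk of the single RAID50
# volume lands on, assuming simple round-robin striping across sub-arrays.
CHUNK = 256 * 1024      # storage-side chunk size from the slide
SUB_ARRAYS = 4          # 6-drive RAID5 x 4 stripes

def sub_array_for(offset_bytes: int) -> int:
    chunk_index = offset_bytes // CHUNK
    return chunk_index % SUB_ARRAYS

# A 1MB sequential read starting at offset 0 touches chunks 0..3,
# i.e. one chunk on each of the four sub-arrays.
for off in range(0, 1024 * 1024, CHUNK):
    print(off, "->", sub_array_for(off))
```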

  3. 24 drive RAID50
  • All are single-controller mappings (max two per controller).
  • With more disks/JBODs, total throughput could be doubled or more.
  • The 1-port configuration is restricted by 8Gbps bandwidth, especially on the read side (see the port bandwidth estimate below).
  • Already seeing the sweet spot at 256KB IO size.
  • If QD=2 gives the best performance, a general application can achieve it, provided the IO block size can be tuned.
  Dot Hill Systems
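For reference, a rough estimate of what one 8Gb FC port can carry (8GFC uses 8b/10b encoding, so the usable payload rate is roughly 800MB/s per port in practice). This is a generic calculation, not a measurement from the deck:

```python
# Hedged estimate of per-port 8Gb Fibre Channel bandwidth.
# 8GFC signals at 8.5 Gbaud with 8b/10b encoding; ~800 MB/s is the
# commonly quoted practical ceiling per port.
line_rate_gbaud = 8.5
encoding_efficiency = 8 / 10                 # 8b/10b line coding
payload_mb_s = line_rate_gbaud * 1e9 * encoding_efficiency / 8 / 1e6

print(f"~{payload_mb_s:.0f} MB/s theoretical payload per 8Gb FC port")  # ~850 MB/s
```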

  4. Configuration 2: 2d-LVM * 2
  • Host: Dell R610, Windows 2008 R2, QLE2564 HBA, IOmeter 2010. Create striped volumes using two volumes from the same controller (2 workers). Create 200GB IOBW.tst files on NTFS.
  • Array: 4720 and 4824, 24 drives, 1 chassis, FC-DAS 8Gb * 2 or 4, 1 or 2 vd / port. FRUKA19 (shorty v2 chassis), KF64 (SFF 600GB 10K SAS), Hitachi HUC1060* x 24, arranged as four groups of 600GB 10K SAS * 6.
  • Layout: four 3TB R5 vdisks (two RAID5 on the A-side, two on the B-side), chunk 512KB.
  • 4824 (redundant): FRUKC54, 4 * 16G FC, Linear LX, 4GB, Sandy Bridge 1.3GHz SC, ASIC (Mangy Moose)
  • 4720 (redundant): FRUKC50, 4 * 8G FC, Linear FX, 4GB, Arrandale 1.8GHz SC, Virtex-6 FPGA (Jerboa)
  Dot Hill Systems

  5. R50 vs. Host 2d-LVM
  • The R50 case is a single LUN from storage, using a single controller and two ports.
  • The difference between 2 ports and 4 ports is obvious.
  • The LVM * 2 configuration uses both A/B controllers and 4 ports.
  • One volume performs at half of these figures.
  • Good for utilizing more total performance and throughput (a rough port-count comparison follows this slide).
  Dot Hill Systems
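A rough upper-bound comparison based purely on front-end port count, using the ~800MB/s practical ceiling per 8Gb port assumed above. The measured results sit below these ceilings, so this only illustrates why 4 ports across both controllers outscale 2 ports on one controller:

```python
# Hedged sketch: FC front-end bandwidth ceilings by port count.
PORT_MB_S = 800   # assumed practical ceiling of one 8Gb FC port

configs = {
    "R50, 1 port":          1 * PORT_MB_S,   #  ~800 MB/s
    "R50, 2 ports":         2 * PORT_MB_S,   # ~1600 MB/s (single controller)
    "2d-LVM * 2, 4 ports":  4 * PORT_MB_S,   # ~3200 MB/s (both controllers)
}
for name, ceiling in configs.items():
    print(f"{name}: up to ~{ceiling} MB/s at the FC layer")
```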

  6. Configuration 3: 4d-LVM
  • Host: Dell R610, Windows 2008 R2, QLE2564 HBA, IOmeter 2010. Created a host-based 4-stripe LVM. Create a 100GB IOBW.tst file on NTFS (a host stripe-mapping sketch follows this slide).
  • Array: 4720, 24 drives, 1 chassis, FC-DAS 8Gb * 2 or 4, 1 or 2 vd / port. FRUKA19 (shorty v2 chassis), KF64 (SFF 600GB 10K SAS), Hitachi HUC1060* x 24, arranged as four groups of 600GB 10K SAS * 6.
  • Layout: four 3TB R5 vdisks (two RAID5 on the A-side, two on the B-side), chunk 512KB.
  • 4720 (redundant): FRUKC50, 4 * 8G FC, Linear FX, 4GB, Arrandale 1.8GHz SC, Virtex-6 FPGA (Jerboa)
  Dot Hill Systems
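A minimal sketch of the host-side 4-stripe idea: the host interleaves its logical address space across all four RAID5 vdisks (two per controller), so even a single sequential stream exercises both controllers and all four FC ports. The 64KB host stripe unit below is an assumption for illustration only; the deck does not state the host stripe size:

```python
# Hedged sketch: host-based 4-stripe volume over four vdisks (2 per controller).
# STRIPE_UNIT is assumed for illustration; the deck does not give the host value.
STRIPE_UNIT = 64 * 1024
VDISKS = ["A-vd1", "A-vd2", "B-vd1", "B-vd2"]

def member_for(offset_bytes: int) -> str:
    return VDISKS[(offset_bytes // STRIPE_UNIT) % len(VDISKS)]

# A single 256KB sequential I/O touches one stripe unit on every vdisk,
# which is why one stream can use both controllers' front-end ports.
for off in range(0, 256 * 1024, STRIPE_UNIT):
    print(off, "->", member_for(off))
```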

  7. 2d-LVM * 2 vs. 4d-LVM
  • At 128KB blocks, the 4-stripe LVM reads ~1GB/s and writes ~700MB/s.
  • With 2 volumes in parallel: Read 1.5GB/s, Write 1GB/s total.
  • This assumes QD=8 can be utilized (depends on the application).
  • Good for read performance, even at 16~64KB IO sizes.
  • A 256KB block on the 4-stripe LVM is almost the sweet spot (see the throughput relation below).
  Dot Hill Systems
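The block-size and queue-depth dependence follows the usual sequential-throughput relation, throughput ≈ queue depth * block size / per-IO latency. The 1ms latency below is an assumed placeholder, not a measurement from the deck; it simply shows how 128KB at QD=8 lands near 1GB/s:

```python
# Hedged sketch: throughput ~= (queue_depth * block_size) / per-IO latency.
# latency_s is a placeholder; the deck reports throughput, not latency.
def throughput_mb_s(block_bytes: int, queue_depth: int, latency_s: float) -> float:
    return queue_depth * block_bytes / latency_s / 1e6

# Example: 128KB blocks at QD=8 with an assumed ~1ms per I/O -> ~1 GB/s.
print(throughput_mb_s(128 * 1024, 8, 1e-3))   # ~1049 MB/s
```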

  8. If only QD=1 is achievable
  • To achieve a 1GB/s read, a 256KB I/O size is needed.
  • A 1GB/s write needs a 1MB I/O size.
  Dot Hill Systems
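These block sizes follow from the same relation at QD=1: block size = target throughput * per-IO latency. Read backwards, the slide implies roughly 0.25ms per read I/O and 1ms per write I/O; those latency values are inferred here for illustration, not stated in the deck:

```python
# Hedged sketch: minimum block size for a throughput target at QD=1.
# The latencies are back-inferred from the slide's figures, not measured values.
def required_block_kb(target_mb_s: float, latency_s: float) -> float:
    return target_mb_s * 1e6 * latency_s / 1024

print(required_block_kb(1000, 0.25e-3))   # ~244 KB  -> "256KB read I/O"
print(required_block_kb(1000, 1.0e-3))    # ~977 KB  -> "1MB write I/O"
```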

  9. Windows MPIO overhead
  • Multiple paths per volume do not give the best performance.
  • 1 optimum path: 1 path/LUN mapping.
  • 2 optimum paths: 4 paths/LUN mapping, two from the owner controller.
  • Users can notice this difference. MPIO should be used only for HA purposes:
  • 1 path each from the A/B controllers, 1 preferred and 1 non-preferred per LUN.
  Dot Hill Systems

  10. Ref.1) Linux vs. Windows
  Linux (left) vs. Windows (right)
  • Linux XFS seems to use a more aggressive read cache.
  • NTFS sector size: default used (8KB?).
  Dot Hill Systems

  11. Ref.2) 4720 vs. 4824, 24drv Seq.
  • No difference under the right load; okay to use the 4720.
  • The 4824 shows better read throughput; same for write.
  • 4720 catalog spec (48 drives):
  • Read 5,200MB/s 1)
  • Write 3,000MB/s
  • 1) Bigger than 6 ports * 8Gbps (see the check below).
  • Random IOPS: 90K IOPS @ 8k, 32K IOPS @ 8k.
  Dot Hill Systems
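A quick sanity check of footnote 1), using the ~800MB/s-per-port practical ceiling assumed earlier; it simply confirms that the catalog read figure exceeds what six 8Gb ports could deliver:

```python
# Hedged check: catalog read spec vs. aggregate 8Gb FC front-end bandwidth.
PORT_MB_S = 800               # assumed practical ceiling per 8Gb port
ports = 6
catalog_read_mb_s = 5200

front_end_ceiling = ports * PORT_MB_S           # 4800 MB/s
print(catalog_read_mb_s > front_end_ceiling)    # True: spec exceeds 6 x 8Gb ports
```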
