Original post:

I have two boxes (both running Mint 19.2) with 10Gbps Mellanox X3 cards connected via a DAC cable, and iPerf reports 9.8Gbps both ways. A is a 24 bay SuperMicro box; its backplane is SAS2 with SFF cabling to the HBA, and the 10Gb link between the boxes is just the DAC cable. When I copy a 10 gig file from the SSD to the RAID 6 array, or from the RAID 6 array to the SSD, I'm getting a hair over 500MB/s. The other box has a bunch of drives in RAID 5 (MDADM, 9207-8i HBA) and an SSD for booting, and copying to/from the SSD in that box I'm also seeing 500+MB/s. So at this point the disk sub-system in both servers is good for 500+MB/s. But when copying over the 10Gbps link, I'm only seeing about 300+MB/s.

I did some digging around and found references to adjusting the MTU. I changed it (from 1500 to 9000) in both systems; the first file copy test got me speeds around 350MB/s, and then every test copy after that was roughly 70MB/s! I restarted both boxes and the MTU is back to 1500. The cards have the latest firmware and are using v4.00 of the driver (the Mint default). I can't apply the absolute latest driver because Mellanox's installer is hard-coded to look only for "officially" supported operating systems, and Mint isn't supported. In case it matters, both systems use their 1Gbps NICs to route out through the switch, get on the internet, and talk to the LAN, so if I need to make some universal kernel boot change, I need to make sure it doesn't negatively impact the 1Gbps NIC. BTW, I tried this test with both NFS and Samba; no difference.

Follow-up from the original poster:

I used iPerf and it reported 9.8Gbps between the PCs. I'm also including screenshots with iostat data as I was copying to/from the RAID 6 array. If you can read it, there's the bonnie output. [Only the bonnie++ 1.97 header survived the paste: Concurrency 1; Sequential Output (Per Chr, Block, Rewrite), Sequential Input (Per Chr, Block) and Random Seeks, then Sequential Create / Random Create, reported as K/sec and %CP.]

Reply:

To recap the hardware:
Server A: RAID 6 using 12x SAS HDD via MDADM (software RAID)
Server B: RAID 1/5 using SSDs (boot drive) with RAID 6 using some HDD or SSD

Let me try to summarize what I see. First off, what is /mnt/md0? Is that the RAID6 array? Bonnie is reporting you're getting about 530 MB/sec write speed (543526 K/sec sequential block output), and on the rewrite a bit lower, roughly 300 MB/sec. During iostat you have device sdv maxing out at near 100% utilization and 500 MB/sec. I assume sdv is your SSD drive, and the other drives with activity (sdf through sdp) are your RAID6 array? It would make sense that it's your RAID6 array, but I want to make sure I'm not making a bad assumption. That array is only utilized 30% during reading and 50% during writing.

What kind of SSD drive is this, and how is it connected? If it's a SATA based SSD then what you are seeing is on par with expectations: SATA II is limited to 3 Gbps, or roughly 300 MB/sec, and SATA III is double that. Taking overheads and such into account, 500 MB/sec on a SATA III drive isn't a bad number, and I'd happily accept it. If you want more speed out of the SSD, NVMe or a 12 Gbps interface are your next options. I'm not sure if this 500 MB/sec number you are getting is something you were expecting, or if that's the problem.

We haven't talked about networking yet, obviously. Assuming this 500 MB/sec is not the issue, the network is next. I will start a new reply with some network suggestions to look at.
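For anyone trying to reproduce the local numbers above, here is a minimal sketch of the kind of test being discussed, using GNU dd alongside the sysstat iostat the thread already references. The /mnt/md0 path and the 10 gig file size come from the thread; the test file name is just illustrative, and dd here stands in for the bonnie run.

# Sequential write of a 10 GiB file, bypassing the page cache so the array is actually hit
dd if=/dev/zero of=/mnt/md0/ddtest bs=1M count=10240 oflag=direct status=progress
# Sequential read of the same file back
dd if=/mnt/md0/ddtest of=/dev/null bs=1M iflag=direct status=progress
# In a second terminal: per-device utilization and MB/s, the same view as the iostat screenshots
iostat -xm 2

This is roughly what bonnie's sequential block output and rewrite figures and the iostat %util column are showing, just in a form that is quick to rerun on either box.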
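On the jumbo-frame experiment, a minimal sketch of what changing and verifying the MTU looks like, assuming iperf3 and the iproute2 tools; the interface name enp3s0 and the 10.0.0.x addresses are placeholders, not values from the thread.

# On both boxes: raise the MTU on the Mellanox port only (the 1Gbps NICs keep their 1500 MTU)
sudo ip link set dev enp3s0 mtu 9000
# Verify a full-size frame crosses the DAC link without fragmenting (8972 bytes + 28 bytes of headers = 9000)
ping -M do -s 8972 -c 4 10.0.0.2
# Re-test raw TCP throughput: server on one box, client on the other
iperf3 -s
iperf3 -c 10.0.0.2 -t 30 -P 4

Since MTU is set per interface, a change like this shouldn't touch the 1Gbps NICs at all. Also worth noting: 9.8Gbps from iPerf is roughly 1.2 GB/s of payload while the drives top out around 500MB/s, so the link itself shouldn't be the ceiling in these copy tests.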