Hi,

The communication happens in the following way: the sd driver issues SCSI commands for the LUNs/disks, and these are passed to the lpfc driver, which encapsulates them in the FC protocol and sends them on to the targets/LUNs. We therefore need to set the sd parameters less than or equal to the lpfc parameters, so that the lpfc driver holds an equal or smaller number of commands and there are no queue-full conditions causing command retries, which do affect throughput.

Arun R Gautham

--- "Hallbauer, Joerg" <Joerg.Hallbauer@warnerbros.com> wrote:

One thing to remember about HDS is that all of the I/Os to a LUSE volume pass through the "head", therefore the transaction rate to a LUSE volume is limited to the maximum that a single LDEV can handle. That's why, for high-I/O environments, HDS wants to keep you to 10-15 LDEVs per LUSE at most.

--joerg

-----Original Message-----
From: John Dong [mailto:johndong@digitalimpact.com]
Sent: Monday, March 08, 2004 11:03 AM
To: Tony Glenn Griffiths; Jon Hudson; john.o.williams@convergys.com; DAUBIGNE Sebastien - BOR
Cc: sunmanagers@sunmanagers.org; veritas-vx@mailman.eng.auburn.edu; veritas-vx-admin@mailman.eng.auburn.edu
Subject: RE: [Veritas-vx] Emulex FC : lun-queue-depth and/or sd_max_throttle ?

Has anyone run tests with HDS? We have an HDS 9960, and sd_max_throttle is currently set to 8 based on the algorithm provided by HDS (basically the same algorithm, 256 / number_of_LUNs). The configuration is a Sun E6800 with Emulex LP9002C HBAs. I noticed the throughput of each port is well below 200 MB/s; it is only about 60 MB/s at peak. Has anyone tried increasing sd_max_throttle in an HDS environment? If so, was there any performance improvement?

Thanks.

John

-----Original Message-----
From: veritas-vx-admin@mailman.eng.auburn.edu [mailto:veritas-vx-admin@mailman.eng.auburn.edu] On Behalf Of Tony Glenn Griffiths
Sent: Monday, March 08, 2004 10:39 AM
To: 'Jon Hudson'; john.o.williams@convergys.com; DAUBIGNE Sebastien - BOR
Cc: sunmanagers@sunmanagers.org; veritas-vx@mailman.eng.auburn.edu; veritas-vx-admin@mailman.eng.auburn.edu
Subject: RE: [Veritas-vx] Emulex FC : lun-queue-depth and/or sd_max_throttle ?

Agreed, this is very interesting. On my Sol 2.9 box, adb shows sd_max_throttle as 256. I have no entries in /etc/system.
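(As a point of reference, a sketch of the kind of check described above. The classic adb one-liner is shown first, assuming /dev/ksyms and /dev/mem as on Solaris 8; on later releases mdb -k accepts the same syntax. The value printed is the in-kernel default unless /etc/system overrides it.)

    # print the current value of sd_max_throttle from the running kernel
    echo "sd_max_throttle/D" | adb -k /dev/ksyms /dev/mem

    # the equivalent query with mdb, where available
    echo "sd_max_throttle/D" | mdb -k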
-----Original Message-----
From: veritas-vx-admin@mailman.eng.auburn.edu [mailto:veritas-vx-admin@mailman.eng.auburn.edu]
Sent: 08 March 2004 17:36
To: john.o.williams@convergys.com; DAUBIGNE Sebastien - BOR
Cc: sunmanagers@sunmanagers.org; veritas-vx@mailman.eng.auburn.edu; veritas-vx-admin@mailman.eng.auburn.edu
Subject: RE: [Veritas-vx] Emulex FC : lun-queue-depth and/or sd_max_throttle ?

This is REALLY interesting. It was my understanding that lun-queue-depth deferred to sd_max_throttle, which also explained why EMC asks that sd_max_throttle == lun-queue-depth. However, Sebastien's test would indicate that sd_max_throttle gets "considered" first, so that you can just use lun-queue-depth as the catch-all.

Moving on to what John said: does anyone know for _sure_ what the default sd_max_throttle setting is on Solaris? Is it really 256, or something lower? I know 256 is the maximum possible, but maybe it is not the default? And if sd_max_throttle behaves as stated here, why do the storage companies have you set it at all? Why not just let the HBA take it and be done with it? I've put in calls to both EMC and Emulex and have yet to talk to anyone who _really_ understood what I was talking about. EMC just kept saying "our recommended default is 20". Grr.

Now, while I agree that the queue is really only being dealt with at the target level, I would personally be a little worried about using tgt-queue-depth as my sole variable. The only reason I say this (and I mean no offense) is that I have data from tests showing that while the queue is managed per target, the performance impact is per LUN.

For example, take two systems with identical configurations. Set the per-LUN queue depth on one to 2 and on the other to 24, and have them both hit the same disk configuration in a test. The system set to 2 will throttle back its I/O to something in the range of 150-175 IOPS, whereas the host set to 24 will reach closer to 400 IOPS. However, set it above 32 and you will start seeing negative results as well: above 32-34 you get no further performance increase, but your response time skyrockets. So basically your optimal range (if you believe this test) is 20-28, 32 on the outside. (The tests were done with EMC, so take that into consideration.)

So if you set the limit by target rather than by LUN, and you set it too high, you can run into problems. In John's case below, if you set it to 512 but only had 16 LUNs, you would throttle them all to 32 IF they were all under load at the same time. But if you were to hit just one LUN really hard, with nothing else (lun-queue-depth, sd_max_throttle) there to stop you, you could easily go above 32 and incur a performance penalty.

I'm sorry for introducing more questions instead of giving answers, but the more I look into this, the less sense the vendors make. It frustrates me that settings which can have SO much performance impact are explained and documented so poorly by the vendors.

-----Original Message-----
From: john.o.williams@convergys.com [mailto:john.o.williams@convergys.com]
Sent: Mon 3/8/2004 7:24 AM
To: DAUBIGNE Sebastien - BOR
Cc: sunmanagers@sunmanagers.org; 'veritas-vx@mailman.eng.auburn.edu'; veritas-vx-admin@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] Emulex FC : lun-queue-depth and/or sd_max_throttle ?

We went through the same exercise with the OS and the Emulex stack. We were benchmarking a new 12K and found that the OS was seriously limiting our I/O throughput due to the LUN/target queue constraints. After much testing we saw a serious increase (~60%) in our I/O throughput by allowing the Emulex driver to handle the target/LUN queues on a per-instance basis.

In a nutshell, we set all of the per-instance settings in lpfc.conf and left sd_max_throttle out of /etc/system. For example:

    # lpfcNtM-tgt-throttle: the maximum number of outstanding commands to
    # permit for an FCP target.
    # By default, target throttle is disabled.
    # Example: lpfc0t17-tgt-throttle=48;
    # says that target 17, interface lpfc0, should be allowed
    # up to 48 simultaneously outstanding commands.
    lpfc0t21-tgt-throttle=512;
    lpfc0t22-tgt-throttle=512;
    lpfc1t21-tgt-throttle=512;
    lpfc1t22-tgt-throttle=512;

In this configuration we took the tgt-throttle and increased it to meet our requirements. The lun-queue-depth was a moot point in this configuration, as what you are seeing is more of a target restriction than a LUN restriction. This should address your queue-full conditions.

There are a number of assumptions being made here. We are running switched fabric with an EMC back end. Since the EMC frames we use are fully populated from a cache perspective, we rarely run into a cache saturation condition.
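(To make the trade-off in this thread concrete, a sketch of the two places the cap can live. The values, instance numbers, and target numbers are illustrative only, taken from the examples above; they are not recommendations.)

    * /etc/system: the array-vendor approach, capping every LUN at the
    * sd layer (e.g. 256 / 16 LUNs per adapter = 16)
    set sd:sd_max_throttle=16

    # lpfc.conf: the per-instance approach described above; sd_max_throttle
    # is left out of /etc/system and each target is capped at the HBA driver
    lpfc0t21-tgt-throttle=512;
    lpfc1t21-tgt-throttle=512;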
Regards,

John Williams

-----Original Message-----
From: DAUBIGNE Sebastien - BOR <sebastien.daubigne@atosorigin.com>
Sent by: veritas-vx-admin@mailman.eng.auburn.edu
Sent: 03/08/2004 08:13 AM
To: "'veritas-vx@mailman.eng.auburn.edu'" <veritas-vx@mailman.eng.auburn.edu>, sunmanagers@sunmanagers.org
Subject: [Veritas-vx] Emulex FC : lun-queue-depth and/or sd_max_throttle ?

Hi,

We have an IBM Shark (ESS 2105-800) connected via 8 Emulex LP9802 FC-AL adapters (lpfc) to our Solaris 8 server, multipathed via VxVM 3.2/DMP.

In its HBA configuration guide, IBM recommends setting:
- Solaris "sd_max_throttle" to (256 / max_number_of_LUNs_per_adapter);
- lpfc "lun-queue-depth" to 30 (the default value).

As I have 16 LUNs per adapter, "sd_max_throttle"=16 is less than "lun-queue-depth".

"sd_max_throttle" is the maximum number of commands the sd driver will send to a LUN before queuing. "lun-queue-depth" is the maximum number of commands the lpfc driver will send to a LUN before queuing. The lpfc queuing algorithm is cleverer than the sd one, because it can decrease the active queue depth when queue_full conditions occur, then increase it again when there has been no queue_full for a certain amount of time (all of this tunable via dqfull-throttle-up-time/dqfull-throttle-up-inc).

As the lpfc driver sends the commands after the sd driver, and each one has its own queuing parameter, I wondered if I could just leave "sd_max_throttle" at its default value (256) and let the lpfc driver manage the queuing. So I ran a typical load test, and the winner is sd_max_throttle at the default value (256): 20% better throughput than with the recommended value (16)!

Now, the funny thing with these parameters is that if we set them too high, the ESS may return QUEUE_FULL conditions, making the server retry commands, which would definitely hurt throughput. That's why I wonder whether anyone has played with both lun-queue-depth and sd_max_throttle and got good results leaving sd_max_throttle at its default value, letting lpfc manage the queue with its cleverer algorithm. Is it safe to limit the active queue size at the lpfc level only, or should I also limit it at the sd level to make sure I don't get queue_full conditions?

Thanks for your feedback. I will summarize.

--
Sebastien DAUBIGNE
sebastien.daubigne@atosorigin.com - +33(0)5.57.26.56.36
AtosOrigin - Infogerance/ERP/Pessac
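(For completeness, a sketch of the lpfc.conf entries Sebastien refers to. lun-queue-depth=30 is the value from his message; the dqfull-* values are illustrative assumptions, not verified defaults, and should be checked against the comments shipped in your lpfc.conf.)

    # per-LUN cap in the lpfc driver (the recommended default from the message)
    lun-queue-depth=30;
    # after a queue_full, lpfc lowers the active depth; if no queue_full is
    # seen for this many seconds, it raises the active depth again...
    dqfull-throttle-up-time=30;
    # ...by this increment each time (illustrative value)
    dqfull-throttle-up-inc=1;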
_______________________________________________
Veritas-vx maillist - Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx