1. IBM eServer BladeCenter vs. Dell PowerEdge 1855: Product-to-Product Comparison. Mike Easterly, WW ISG Competitive
January 13, 2005
2. Agenda
Key Selling Points
Overall
Market share and Portfolio
Standard and Application Benchmark Comparison
BladeCenter vs. Dell
Portfolio Comparison
Innovative Infrastructures
Availability and Reliability Comparison
Ease of Use Comparison
BladeCenter HS20 vs. PowerEdge 1855
Feature Comparison
Ease of Use
Key Selling Points
3. Help reduce infrastructure costs
Power and Cooling – IBM can help save customers significantly (Gartner Group)
Support for more blades can mean a lower shared-infrastructure cost per blade (switch, chassis, etc.); see the amortization sketch at the end of this list
Highly available solutions
Redundant midplane - redundant paths for extra protection of communications between blades & modules
Support for redundant KVM/management modules, which can provide higher availability and management capability from virtually anywhere at any time, in addition to helping minimize cable clutter
Chipkill memory technology, which delivers memory that is up to 16 times more reliable than standard ECC memory
Enhanced Predictive Failure Analysis (PFA) on blowers, processors, memory and hard drives anticipates failures and generates an alert, allowing IT to replace components before the failure actually occurs.
Light path diagnostics provides a central information LED panel (visible without removing the cover) and individual LEDs throughout the system on components such as memory DIMMs, PCI slots, VRMs, power supplies and CPUs.
Infrastructure simplification
Support for Windows, Linux and AIX on Intel, AMD or POWER processors
Integrated switch offerings:
Ability to spread costs over 14 blades helps with return on investment
Choice of different technologies (Layer 3-7 switching and Fibre Channel)
BladeCenter Alliance and Open Specifications
Leverage integrated CD & floppy – vs. “custom cables” and shelves
Increase storage utilization
Push hard for integrated SAN connect and Boot from SAN – gives IBM a price advantage
Investment protection and Serviceability
Same chassis since announcement: no changes required to the midplane. Financing for up to 5 years
Easy to install and service with screwless design
CD-ROM and floppy that can be shared amongst blades, helping IT easily install software or files locally.
Increased server density
Space savings: up to 40-70% better than the competition
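As a quick illustration of the amortization point flagged above, here is a minimal Python sketch. The chassis and switch prices are hypothetical placeholders; only the 14 vs. 10 blade counts come from this comparison:

```python
# Minimal sketch: amortizing shared chassis/switch cost across blades.
# The $3,000 chassis and $2,000 switch prices are hypothetical, not quotes.

def shared_cost_per_blade(chassis_cost: float, switch_cost: float, blades: int) -> float:
    """Amortize chassis + switch cost across the blades they serve."""
    return (chassis_cost + switch_cost) / blades

for blades in (10, 14):  # Dell PowerEdge 1855 chassis vs. IBM BladeCenter
    cost = shared_cost_per_blade(3_000, 2_000, blades)
    print(f"{blades} blades/chassis -> ${cost:,.2f} shared cost per blade")
```

With the same shared-infrastructure price, more blades per chassis means each blade carries a smaller slice of it.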
4. Market share comparisons
5. Things that make you go hmm...
Dell speaks proudly about its work on the forthcoming blade server. "I designed my own product, and I'm pretty excited about it," said Jeff Clarke, senior vice president of Dell's product group. The systems are derived from Dell's eighth-generation servers, which debuted in August, along with Intel's newest "Nocona" version of the Xeon processor, he said.
Below, the Fujitsu-Siemens blade announced in Feb. 2004 is compared to the Dell PowerEdge 1855, announced in November 2004.
6. IBM eServer BladeCenter vs. Dell 1855: A Chassis/Infrastructure Comparison
7. Investment Protection
The IBM BladeCenter chassis was announced back in Sept. 2002
Customers are able to use the same chassis today
Dell’s first blade was the PowerEdge 1655MC
Announced in 2002
No other blades were supported in the chassis
For a while, Dell talked down the need for blades
In late 2004, Dell came out with a follow-on product, the PowerEdge 1855
Over 2 years late with support for Xeon blades
Entirely new chassis
8. IBM eServer BladeCenter
Has support for multiple processor offerings
Intel Xeon 2-way
Intel Xeon processor MP 4-way
IBM PowerPC based processor 2-way
Customers can mix all IBM blades in the same chassis
Support for 64-bit Operating systems (Linux and AIX)
9. Density Comparison for SCSI
Keep in mind IBM offers customers choice:
Leverage max. density through the use of IDE drives, SCSI drives or Boot from SAN
In any of the three scenarios, clients can support up to 84 blades per 42U rack
Plus, for those situations where a client might want more local drives, clients can simply attach the SCSI storage expansion unit to provide an additional two SCSI drives
While Dell will claim support for up to 60 blades with SCSI HDD, it is important to consider:
Space requirements for Fibre Channel switches (often 1U each, with 2 wanted for redundancy) could mean one less Dell blade chassis, resulting in support for only 50 blades. Remember, IBM supports integrated Fibre Channel switch options.
If a customer is sold on having hot-swap SCSI
Make sure the customer understands IBM can also offer it via the SCSI expansion option
Explain investment protection: if over time the customer wants to move to Boot from SAN, they can remove the SCSI expansion and get back to 14 blades per 7U, plus the hot-swap SCSI drives will work in other select xSeries servers.
By using the SCSI expansion unit, customers get the capability of RAID 1E, which our competitors cannot offer
Talk about the space Dell customers might require for switches; it could bring them down to 50 blades per 42U rack (see the density sketch after this list)
Now leverage all the other BladeCenter selling points
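A minimal sketch of that density math, using the slide's own assumptions (42U rack, 7U chassis for both vendors, 14 vs. 10 blades per chassis, and 2U of external FC switches forcing Dell down to one fewer chassis):

```python
# Rack-density math from this slide. IBM's FC switches are integrated
# (no extra rack units); Dell's external FC switch pair is modeled as 2U.

def blades_per_rack(blades_per_chassis: int, chassis_u: int = 7,
                    switch_u: int = 0, rack_u: int = 42) -> int:
    """Blades that fit in a rack after reserving space for external switches."""
    chassis = (rack_u - switch_u) // chassis_u
    return chassis * blades_per_chassis

ibm = blades_per_rack(14)               # 6 chassis x 14 = 84 blades
dell = blades_per_rack(10, switch_u=2)  # (42 - 2) // 7 = 5 chassis x 10 = 50
print(f"IBM {ibm} vs. Dell {dell}: {ibm / dell - 1:.0%} more blades per rack")
```

Under these assumptions the result is 84 vs. 50 blades, i.e. 68% more per rack, which matches the upper end of the "40-68% more blades" claim later in this deck.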
10. Innovative Infrastructure
IBM believes that BladeCenter allows for new infrastructure advantages vs. simply being another server.
BladeCenter is also designed to allow customers to pay for only those capabilities they need. Customers can:
Get extreme density using IDE, SCSI or Boot from SAN
IBM provides multiple Integrated Ethernet Switch Options
Dell only offers the PowerConnect 5316M (6-port)
Support for up to 4 Gigabit Ethernet ports per blade
Dell has talked about future support
Can leverage the advantages of Fibre Channel integrated into BladeCenter
Dell currently only offers pass-through, has talked about future support
Support for Layer 2-7 switch technology integrated into the BladeCenter
Dell currently does not offer this capability
For customers wanting HPC – IBM allows for integration of a Myrinet Card.
Topspin IB Switch Module for IBM
Dell has talked about future support
http://www.f5.com/f5products/bigip/BladeController/
White Paper - http://whitepapers.zdnet.co.uk/0,39025945,60076016p-39000690q,00.htm
http://h30094.www3.hp.com/product.asp?sku=2406782&jumpid=ex_r2910_nextagsmb/accessories
11. Dell will talk about its 10Gb-capable backplane
Currently I am not aware of many available switches that can take advantage of this capability
Clients might want to ask Dell when they plan to have a 10Gb switch
As new technologies become available vendors will be required to look closely at the power and cooling capabilities of new switch technologies
Those considering Dell should ask whether future 10Gb switch technologies can be supported in the existing Dell chassis with:
the current backplane, or will it require changes
the current power supplies, or will they require changes
12. Cooling Implementation
IBM has designed BladeCenter for extreme levels of availability by trying to reduce potential points of failure, in addition to providing technologies to help discover problems before they occur.
To do this, IBM BladeCenter was designed with 2 blowers, which include Predictive Failure Analysis to help notify IT of impending failures before they occur.
Dell
Will not notify you until something has actually failed, or until environmentals are noticed out of range
Has a total of 8 fans [two in each fan module (4 total) and one in each power supply (4 total)]
It is common knowledge that components with moving parts are more often subject to failure. http://www.orionfans.com/html/life_expectancy.html
IBM also located processors at the front of the blade in order to get the coolest air first. Dell has talked about issues with cooling blades: their air first passes the HDDs, which heats it up, and then goes over the processors.
The web site above puts average fan life at about 65,000 hours. Quick math: 65,000 hours across 64 fans in a rack of chassis means a fan dying every ~1,016 hours, or roughly every 42 days.
Now think about our hot-swap blowers. Assuming the same 65,000-hour life, two blowers means 65,000/2 = 32,500 hours, or a hot-swap failure every ~45 months, compared to their 42 days, assuming both sides use ball-bearing fans. If HP is using cheap sleeve-bearing fans while we use ball-bearing fans, it's even worse for HP.
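That back-of-envelope fan math can be checked directly. A minimal Python sketch, assuming the note's own figures (65,000-hour average fan life, 64 fans across a rack of chassis, failures spread evenly):

```python
# Speaker-note fan math: with n identical fans, expect roughly one
# failure every (average life / n) hours.

FAN_LIFE_HOURS = 65_000  # per the Orion Fans life-expectancy page cited above

def mean_hours_between_failures(fan_count: int) -> float:
    return FAN_LIFE_HOURS / fan_count

many_fans = mean_hours_between_failures(64)  # competitor: 64 fans per rack
blowers = mean_hours_between_failures(2)     # BladeCenter: 2 blowers per chassis

print(f"64 fans:   one failure every {many_fans:,.0f} h (~{many_fans / 24:.0f} days)")
print(f"2 blowers: one failure every {blowers:,.0f} h (~{blowers / (24 * 30.4):.0f} months)")
# -> ~1,016 h (~42 days) vs. 32,500 h (~45 months)
```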
13. Cooling - Airflow
BladeCenter is designed to maintain exceptional cooling even during service
When removing a blade “louvers” fall down to maintain effective cooling and airflow
When removing a Dell blade, there is nothing to help maintain effective cooling and airflow; as a result, cool air could get sucked into the open slot, bypassing the installed blades that need cooling
Keep in mind air flows in the direction of least resistance.
14. Power Distribution
IBM BladeCenter is designed with N+N power
The Dell PowerEdge 1855 is designed with N+1 power
Things for clients to consider
How many UPS power feeds do clients have to their racks?
Often clients will respond with the answer being 2
Dell requires that there be three power supplies to keep running
If the customer connects 2 power supplies to "UPS A" and the other 2 to "UPS B", and one UPS goes down, the Dell chassis does not have the three power supplies it needs to keep running (see the sketch below)
If Dell brings up "transfer switches", most IT departments will know they can be prone to failure in this topology
Dell's user guides and installation troubleshooting guides allude to Dell moving to N+N capability with new 2100 W power supplies
IBM is not aware of when Dell will have this capability
Those considering Dell should ask:
What chassis/midplane changes are required to support N+N?
Will I need to replace the 1200 W power supplies with 2100 W power supplies in order to get N+N?
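A hedged sketch of the UPS-feed scenario above: four supplies split two per feed, where an N+1 chassis needs three supplies running and an N+N chassis needs two. The feed names and supply counts are the slide's hypothetical, not a measured configuration:

```python
# Does the chassis survive losing any single UPS feed?

def survives_feed_loss(supplies_per_feed: dict, required: int) -> bool:
    """True if losing any one feed still leaves enough running supplies."""
    total = sum(supplies_per_feed.values())
    return all(total - lost >= required for lost in supplies_per_feed.values())

feeds = {"UPS A": 2, "UPS B": 2}  # 4 supplies, 2 per feed
print("N+1 (needs 3 of 4):", survives_feed_loss(feeds, required=3))  # False
print("N+N (needs 2 of 4):", survives_feed_loss(feeds, required=2))  # True
```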
15. IBM has designed BladeCenter for extreme levels of availability by:
Helping predict failures before they occur with Predictive Failure Analysis
Including extreme levels of redundancy
Think of the impact of not being able to access the Dell chassis w/ up to 10 blades
Think of the impact of having a midplane issue and up to 10 blades unavailable
16. IBM has designed BladeCenter for extreme levels of availability, which is important when you have 14 blade servers supported through a midplane. BladeCenter was designed with a midplane that allows two communication paths between the blades and the BladeCenter chassis modules (switch and management), helping reduce single points of failure.
With this dual path, if something were to happen to one path, the other keeps communication and the chassis up and running.
This will allow customers to schedule down time when it will least impact the business.
Consider the backplane as passive because there is no interface or configuration of it.
Dell on the other hand
If something goes wrong with the backplane there is no alternative path to communicate with its switch or pass-through modules.
The result is up to 10 Dell blade servers being unavailable to the network, which could impact the customer's business. Look at the midplanes.
17. Electrical Requirements
IBM eServer BladeCenter is designed to support dual-processor blade servers that are efficient in electricity consumption relative to the amount of processing power they deliver.
When comparing the BladeCenter HS20 to traditional rack-optimized servers, customers could save up to 44% by using IBM BladeCenter*.
Dell claims its 1855 blade uses 13% less power than a traditional 1U server**
**** Calculation:
Power costs = [0.08 (rate, $/kWh) x watts x 8,760 (hours per year, 24x7)] / 1,000 (watts per kW)
Cooling costs = [BTU per hour / 12,000] x 1.5 (kilowatts per ton) x 8,760 (hours per year, 24x7) x 0.08 (cost per kilowatt-hour)
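A minimal Python version of those two footnote formulas. The 2,000 W example load and the 3.412 BTU/h-per-watt conversion are illustrative assumptions, not figures from the deck:

```python
# Footnote formulas: annual power cost and annual cooling cost at $0.08/kWh,
# 24x7 operation, and 12,000 BTU/h per ton of cooling at 1.5 kW per ton.

RATE = 0.08             # $ per kWh
HOURS_PER_YEAR = 8_760  # 24 x 365

def annual_power_cost(watts: float) -> float:
    return RATE * watts * HOURS_PER_YEAR / 1_000  # divide by 1,000: W -> kW

def annual_cooling_cost(btu_per_hour: float) -> float:
    tons = btu_per_hour / 12_000
    return tons * 1.5 * HOURS_PER_YEAR * RATE

watts = 2_000        # hypothetical chassis load
btu = watts * 3.412  # heat rejected, at ~3.412 BTU/h per watt
print(f"power:   ${annual_power_cost(watts):,.2f}/yr")
print(f"cooling: ${annual_cooling_cost(btu):,.2f}/yr")
```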
18. Ease of deployment and Serviceability
BladeCenter is designed to give customers choice
With CD and Floppy standard, these can be shared amongst blades for those wanting to work locally with the blades.
Or the ability to work virtually through IBM Director and the Management Module, for those who want to work remotely using virtual media.
Dell requires the use of a “custom cable” to work locally
Requires a custom cable (front dongle cable), with one end plugging into the blade and the other splitting off to provide 2 USB connections and a video connection. $89*
A Dell shelf for holding the USB CD or floppy drive
Must be mounted to the rack/chassis
When it is installed, blades cannot be removed or installed.
Also, Dell has a second custom cable that connects to the KVM to provide two PS/2 connections and a video connection. The cables are not interchangeable.
Fujitsu – USB connection: Figure 30 on page 68 of blade guides
19. IBM BladeCenter's design helps reduce/minimize downtime
High availability features IBM offers
Enhanced Predictive Failure Analysis: if readings reach pre-defined thresholds, they trigger an alert, allowing proactive action (ahead of actually experiencing a failure) to replace the component.
Real-time diagnostics can run against a server (or chassis) without requiring a maintenance outage.
The third-generation light path diagnostics module includes its own power source, allowing the service technician to pinpoint the failure.
IBM BladeCenter has a service-friendly, tool-less modular design
Dell's modular design requires screws in some instances to remove parts
KVM module – has 1 screw that secures the release lever to the module
Module cage – 4 screws secure the module cage to the chassis frame
Midplane – access to the midplane is not easy, and it uses two types of screws; after we took it apart, we had trouble getting two of the screws back in
Server module control panel – has 2 screws to secure it
Processor - 4 screws/CPU secure the heat sink to the server module board.
NOTE: When removing the heat sink, the possibility exists that the processor might adhere to the heat sink and be removed from the socket. It is recommended that you remove the heat sink while the processor is still warm.
20. Servicing a Blade Server
IBM blades are designed to slide in only one way
Dell’s can accidentally be slid in the wrong way
Notice in product guide: “it is possible to insert the server module upside-down, which may damage the chassis midplane and the server module.”
To help protect against this, Dell has installed a pin designed to pop out if the blade is inserted upside down
21. Ethernet Switch: IBM vs. Dell
IBM offers customers choice, both in switch modules and NICs per blade server.
Switch Modules
IBM offers 2 Gigabit Ethernet Switch Modules (Cisco being one), plus a Nortel Networks® Layer 2-7 switch. BladeCenter can support up to 4 switch modules per chassis.
Dell only supports the PowerConnect 5316M (6-port)
For those wanting 2 NICs (keep in mind blades per chassis: 14 vs. 10)
For those customers wanting more than 2 NICs
IBM BladeCenter dual Gigabit Ethernet expansion card, $249/blade (provides 4 NICs).
Dell does not support any more than 2 at this time, but mentioned in its user guide a "Gb Ethernet daughter card (when available)".
IBM sales history indicates that only a small percentage of customers require more than 2 NICs (approximately 5%)
283192-B21 ProLiant BL p-Class C-GbE2 Interconnect Kit (with 12 RJ-45 10/100/1000 T/TX/T external ports) [Add $4,399.00]
22. Fibre Channel Switch Support
IBM offers the IBM BladeCenter 2-port Fibre Channel Switch Module at $13,999 and the Brocade® SAN Switch Module starting at $14,999/$18,999.
Plus a Fibre Channel Expansion Card per blade, which costs $750*
Dell only offers a Pass-Through Fibre Channel module which requires:
Each blade to have a Fibre Channel PCI Host Bus Adapter $499*
Two Pass-Through Fibre Channel modules (required for FC connectivity) $1,199*
Two Fibre Channel cables ($79.95 each) Dell Part# A0075302
Two HP StorageWorks SAN switch 2/16N FF (redundancy)16 ports - $16,500*
Customers can save valuable rack space, plus reduce cable cost & cable clutter
External Fibre Channel switches can take up 1U or 2U of rack space each; keep in mind that redundancy requires switches in pairs
Using the 2-port Fibre Channel Switch Module for BladeCenter means 4 cables per chassis, or 24 per 84 servers, while Dell would need 20 cables per chassis, or 168 cables per 84 servers. At $79.95 each, supporting 84 blades requires Dell to buy up to 168 additional cables, costing $13,431.60 (see the cost sketch below).
http://www.f5.com/f5products/bigip/BladeController/
White Paper - http://whitepapers.zdnet.co.uk/0,39025945,60076016p-39000690q,00.htm
http://h30094.www3.hp.com/product.asp?sku=2406782&jumpid=ex_r2910_nextagsmb/accessories
IBM
HS20: $13,999 x 2 = $27,998, divided by 14 blades = $1,999, plus the $750 card
http://h71016.www7.hp.com/dstore/MiddleFrame.asp?page=config&ProductLineId=450&FamilyId=823&BaseId=8418&oi=E9CED&BEID=19701&SBLID=&AirTime=False
Dell =
(2x$1,199)/10 = $240 (Pass-through)
($16,500x2)/16 (two chassis) = $2,062 (HP StorageWorks SAN switch 2/16N FF)
$499: each blade needs a dual-port Fibre Channel mezzanine card
(20 x $80)/10 = $160 (cables per blade)
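Pulling the notes above into one place, here is a sketch of the per-blade Fibre Channel cost math (list prices exactly as quoted in these notes; the division choices, such as spreading the external SAN switch pair across 16 blades, follow the notes as written):

```python
# Per-blade FC connectivity cost, IBM BladeCenter vs. Dell PowerEdge 1855.

def per_blade(total_cost: float, blades: int) -> float:
    return total_cost / blades

# IBM: two integrated FC switch modules shared by 14 blades + expansion card.
ibm = per_blade(2 * 13_999, 14) + 750

# Dell: two pass-through modules per 10-blade chassis, an external SAN
# switch pair spread across 16 blades (per the note), an HBA per blade,
# and 20 FC cables (~$80 each) per chassis.
dell = (per_blade(2 * 1_199, 10)      # pass-through modules
        + per_blade(2 * 16_500, 16)   # HP StorageWorks 2/16N pair
        + 499                         # dual-port FC mezzanine card
        + per_blade(20 * 80, 10))     # cables

print(f"IBM ~${ibm:,.0f}/blade vs. Dell ~${dell:,.0f}/blade")
# -> roughly $2,750 vs. $2,961 per blade
```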
23. Other personal thoughts
Dell has essentially validated the Quanta design, and might therefore encourage more OEMs to build modules for the Dell/Fujitsu blade design as an attempt to combat the IBM/Intel open-specification efforts
Dell will talk a lot with customers about future switching technologies
Brocade switch in 1Q 2005
Dell will likely emphasize better reliability with no fans or power supplies internal to its switch - we should be emphasizing the same thing.
Experience with Blades
IBM has many installed happy customers
All blades aren't created equal
Many 1U competitive customers have made the transition
IBM has 2+ years experience working with the server/networking/storage camps
Dell is not a technology company; for example, they said "...cooling blades isn't easy...".
Dell will hype its OpenManage tool, but Altiris appears to be critical; make sure the additional cost of the Altiris suite is included in any TCO analysis.
24. Odds and ends from Dell Product Literature
At present, the QLogic SANblade Manager application does not support the Dell 2342M Integrated Fibre Channel Module daughter card for the PowerEdge 1855. When complete, drivers will be available at support.dell.com.
But there appears to be a workaround.
High availability clustering with Microsoft Cluster Server (MSCS) is not supported. Support for this feature is planned for Q1 2005.
The back panels of the 1200-W power supply modules protrude approximately one-half inch from the back of the chassis.
The KVM switch module does not offer graphical console redirect; customers will need to get the KVM switch module with the KVM-over-IP interface.
IBM offers Remote Control over network.
IBM provides rack rails standard, while Dell requires separate purchase.
25. IBM eServer BladeCenter HS20 vs. Dell PowerEdge 1855: A 2-way blade Comparison
26. BladeCenter HS20 vs. Dell PowerEdge 1855
27. BladeCenter HS20 vs. Dell PowerEdge 1855
BladeCenter Selling Points
Performance
Support for up to 4 NICs per blade, while Dell can only support up to 2
Flexibility/Economics
Support for 40-68% more blades per enclosure
Support for an internal FC switch module, while the PowerEdge 1855 only supports pass-through. This can significantly help reduce cable clutter (IBM: zero cables from blades to switch vs. Dell's 2 per blade)
Support for customer choice for Ethernet and Fibre Channel switch technologies
Investment protection with support for Layer 3-7 switches
Availability and Reliability Features
Chipkill memory
Redundant midplane standard
Enhanced Predictive Failure Analysis®
Manageability and Serviceability Features
IBM Director Advanced Systems Mgmt software
Light path diagnostics vs. limited system LED functions
IBM has integrated CD and floppy support
Dell Selling Point
Support for 6 memory DIMMs, while the HS20 supports 4 memory DIMMs
Support for two hot-swap SCSI drives standard - many IBM clients will find fixed SCSI sufficient; if not, IBM has an optional hot-swap SCSI expansion unit.
28. IBM eServer BladeCenter JS20 vs. Dell PowerEdge 1855: A 2-way blade Comparison
29. BladeCenter JS20 vs. Dell PowerEdge 1855
30. BladeCenter JS20 vs. Dell PowerEdge 1855
BladeCenter Selling Points
Performance
Support for up to 4 NICs per blade, while Dell can only support up to 2
Flexibility/Economics
Support for AIX
Support for 40-68% more blades per enclosure
Support for an internal FC switch module, while the PowerEdge 1855 only supports pass-through. This can significantly help reduce cable clutter (IBM: zero cables from blades to switch vs. Dell's 2 per blade)
Support for customer choice for Ethernet and Fibre Channel switch technologies
Investment protection with support for Layer 3-7 switches
Availability and Reliability Features
Chipkill memory
Redundant midplane standard
Enhanced Predictive Failure Analysis®
Manageability and Serviceability Features
IBM Director Advanced Systems Mgmt software
Light path diagnostics vs. limited system LED functions
IBM has integrated CD and floppy support
Dell Selling Point
Support for 6 memory DIMMs, while the JS20 supports 4 memory DIMMs
Support for two hot-swap SCSI drives standard - many IBM clients will find fixed SCSI sufficient; if not, IBM has an optional hot-swap SCSI expansion unit.
31. Footnotes
(c) 2004 IBM Corp. All rights reserved.
Visit www.ibm.com/pc/safecomputing periodically for the latest information on safe and effective computing. Warranty Information: For a copy of applicable product warranties, write to: Warranty Information, P.O. Box 12195, RTP, NC 27709, Attn: Dept. JDJA/B203. IBM makes no representation or warranty regarding third-party products or services.
Telephone support may be subject to additional charges. For onsite labor, IBM will attempt to diagnose and resolve the problem remotely before sending a technician.
IBM makes no representation or warranty regarding third-party products or services including those designated as ServerProven or ClusterProven.
All offers subject to availability. IBM reserves the right to alter product offerings and specifications at any time without notice. IBM is not responsible for photographic or typographic errors.
This publication was developed for products and services offered in the United States. IBM may not offer the products, services or features discussed in this document in other countries. Information is subject to change without notice. Consult your local IBM representative for information on offerings available in your area.
All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of a specific Statement of General Direction.
The examples given in this paper are hypothetical examples of how a customer can use the products described herein and examples of potential cost or efficiency savings are not based on any actual case study. There is no guarantee of comparable results. Many factors determine the sizing requirements and performance of a systems architecture. IBM assumes no liability for the methodology used for determining the configurations recommended in this document nor for the results it provides. Any performance data contained in this presentation was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements quoted in this presentation may have been made on development-level systems. There is no guarantee these measurements will be the same on generally-available systems. Some measurements quoted in this presentation may have been estimated through extrapolation. Actual results may vary. Users of this presentation should verify the applicable data for their specific environment.
34. Competitive Information
Fujitsu-Siemens information was taken from:
Their web sites
Their “PRIMERGY BX600 Blade Server System - Operating Manual”
Dell Information was taken from:
Their web site
Their support documents – Service manuals and Users Guides
Pricing information from Ideas International, as of May 6, 2004, was also used.