Appro Products and Solutions
Appro, Premier Provider of Scalable Supercomputing Solutions: 4/16/09
Anthony Kenisky, Vice President of Sales
Competitive Advantages :: Technology Differentiation
• Network Communications Performance: scalable dual-rail 3D torus topology, providing superior network communications performance
• Diskless Operation with Standard Linux OS Support: no need for a Light-Weight Kernel (LWK), which allows the system to run a wider range of commercial applications
• Complete Cluster Management: ACE™ Cluster Management Software to simplify management of large, complex systems
• Packaging: Directed Airflow, an innovative air-cooling system architecture that supports very dense systems and lowers facility cooling costs
• Dynamic Virtual Cluster Operation: the system can be partitioned to support multiple operating environments, providing flexibility in 3rd-party application support
Appro Product Portfolio :: Diverse Product Line
• Xtreme-X™ Supercomputer: performance and availability for large-sized deployments
• HyperClusters™: flexibility and choice for medium to large-sized deployments
• GreenBlade™ System: modular solution for small to mid-sized deployments
Product Portfolio :: Feature Summary
• Appro Xtreme-X™ Supercomputer
  • Balanced Architecture (CPU/Memory/Network)
  • Scalability (SW & Network)
  • Reliability (Real RAS: Network, Node, SW)
  • Facilities (30% less Space, 28% less Power & Cooling)
• Appro GreenBlade™ System
  • Price/Performance/Value
  • N+1, 90%+ Efficient Power Design
  • Hot-swap Blades with Local Storage Option
  • Density Advantage: 10 Nodes/80 Cores per System
• Appro HyperClusters™
  • Clusters based on GreenBlade Systems or Rack-mount Servers
  • Choice of CPUs or GPUs
  • Standard 19" Rack Support
  • Price/Performance per Watt, Reliability and Flexibility
Xtreme-X™ Supercomputer :: Performance and Availability
• Power/cooling-efficient solution with lower TCO
• 512 compute cores per rack
• Peak Performance/Rack: 6.1TF
• Memory Capacity/Rack: 1.5TB
• Dual-Rail Management Network
• Efficient Cooling Architecture: 28.8kW/Rack
• Redundant PSUs: 90% Efficiency, 5+1, PFC
• Redundant Fans: up to 460W power/Node
• Appro Cluster Engine™ Management Software
• Complete package with improved RAS
• Ideal for large deployments, > 1,000 nodes
• Scales up to 1,000TF of computing power
[Diagram: 36-port QDR InfiniBand switch, dual-rail GbE management network switch, sub-rack of 16 compute nodes]
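The per-rack figures above can be reproduced with simple back-of-the-envelope arithmetic. The sketch below is an illustration only: it assumes 64 dual-socket quad-core nodes per rack (512 cores), ~3 GHz x86 cores retiring 4 double-precision FLOPs per cycle, and 24GB of memory per node; none of these per-node figures are stated on the slide.

```python
# Back-of-the-envelope check of the Xtreme-X per-rack numbers.
# All per-node inputs below are assumptions for illustration, not published specs.

NODES_PER_RACK = 64        # assumed: 4 sub-racks x 16 compute nodes
CORES_PER_NODE = 8         # assumed: dual-socket, quad-core x86
CLOCK_GHZ = 3.0            # assumed core clock
FLOPS_PER_CYCLE = 4        # assumed double-precision FLOPs/cycle/core
MEM_PER_NODE_GB = 24       # assumed memory per node

cores = NODES_PER_RACK * CORES_PER_NODE                      # cores per rack
peak_tflops = cores * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0   # GFLOPS -> TFLOPS
memory_tb = NODES_PER_RACK * MEM_PER_NODE_GB / 1000.0        # GB -> TB (decimal)

print(f"{cores} cores/rack, {peak_tflops:.1f} TF peak, {memory_tb:.1f} TB memory")
# -> 512 cores/rack, 6.1 TF peak, 1.5 TB memory (matches the slide's figures)
```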
Xtreme-X™ Supercomputer :: Directed Airflow Cooling Methodology
[Diagram: top view of datacenter floor space]
• Improved cooling efficiency, increasing computing capacity up to 28% with the same amount of cooling
• 100% of the AC airflow is forced through the entire system, increasing system cooling efficiency
• Reduced total cost of ownership and increased computing capacity
Xtreme-X™ Supercomputer :: Recent Design Deployment
Renault F1 Racing Team: delivered a 38TF Xtreme-X Supercomputer for a virtual wind tunnel project.
Customer quote: "Appro not only offered us a cost-effective solution but they also improved our required technical specification through better reliability, greater fault tolerance and redundancy as well as more flexibility with regards to system scalability."
Bob Bell, Technical Director, ING Renault F1 Team
GreenBlade™ System :: HPC Building Block Solution
• Green Architecture: power-optimized design that reduces the power consumption of the platform's cooling subsystem
• Increased Density: double the density of standard 1U servers
• Improved R/A/S:
  • Shared N+1 power design
  • Shared (3+3) cooling design
  • Hot-swappable blades
  • Hot-swappable power supplies
  • Hot-swappable cooling fans
[Images: front view and rear view]
GreenBlade™ System :: Benefits vs. Standard 1U Servers
• Three-year savings in power cost per server: standard 1U servers vs. GreenBlade™
• To adequately cool a compact 1U server, the typical 1U cooling system is designed as follows:
  • A standard 1U server uses up to six compact fans to cool one system
  • A popular twin 1U server uses six compact fans to cool two systems
• GreenBlade™, by contrast, uses six larger (120mm), more reliable fans to cool up to 10 systems
• The example below translates to a lower TCO for a GreenBlade™ solution over the competition
* Power cost calculated at $0.13/kWh, per the Dept. of Energy's 2007 national estimate of commercial power cost
** Cooling power requirement estimated at 60% of total power supplied
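The slide's footnotes give the two inputs behind the savings comparison: $0.13/kWh and a cooling overhead of 60% of the power supplied. Below is a minimal sketch of that three-year cost calculation; the per-server wattage figures are hypothetical placeholders, since the original chart data is not reproduced here.

```python
# Three-year power-plus-cooling cost per server, using the slide's footnote
# assumptions: $0.13/kWh and cooling estimated at 60% of supplied power.
# The wattage figures below are hypothetical placeholders for illustration.

KWH_PRICE = 0.13          # $/kWh (2007 DoE commercial estimate, per the slide)
COOLING_FACTOR = 0.60     # cooling power as a fraction of supplied power
HOURS_3YR = 3 * 365 * 24

def three_year_cost(server_watts: float) -> float:
    """Total 3-year electricity cost: server draw plus cooling overhead."""
    total_watts = server_watts * (1 + COOLING_FACTOR)
    return total_watts / 1000.0 * HOURS_3YR * KWH_PRICE

# Hypothetical per-server draws (not from the slide):
std_1u_watts = 350        # a standard 1U server
greenblade_watts = 280    # one GreenBlade node's share of the enclosure

saving = three_year_cost(std_1u_watts) - three_year_cost(greenblade_watts)
print(f"3-year saving per server: ${saving:,.0f}")
```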
New - Appro HyperClusters™ :: Latest Processor and GPU Technologies
• Appro HyperPower™ Cluster: GPU, performance-optimized clusters
• Appro HyperGreen™ Cluster: power-efficient, based on the GreenBlade System
Coming May 2009
• Based on an open architecture designed for medium to large deployments
• Delivers performance, scalability and greater flexibility in a dense cluster architecture
• Offers the latest processor and GPU computing technologies
• Open cluster management options in an economical and power-efficient compute platform
• Choice of server, networking and software with a variety of configuration options
• Tested and pre-integrated solution deployed as a complete package
• HPC professional services and support available
HyperGreen™ Cluster :: Maximum Rack Configuration
Specifications
• Cluster solution based on the Appro GreenBlade System
• Up to 80 DP GreenBlades™/640 cores per 42U rack
• Up to 5.12TB of system memory
• Up to 80TB of local storage
• Fully populated rack weighs ~1,749 lbs
• Rack-level power consumption is ~16kW to 32kW
• 20% power reduction per node compared to 1U servers
• Supports multi-configuration and interconnect options
• Standard IPMI or Appro BladeDome remote server management
• Choice of open software solutions using Rocks+ and MOAB
• Ideal for mid to large-sized HPC and high-density computing
HyperGreen™ Cluster :: One Scalable Unit – Cluster Recipe
Specifications
• Supports up to 8x Sub-Racks
• Supports up to 80x DP Compute Blades
• Uses one SXB100 or SXB200 as the master node
• Some blades can be configured as network storage nodes for the cluster
• Supports up to 7.6TF per rack
• Supports up to 5.12TB of system memory
• Supports up to 160x internal 2.5" HDDs, equal to 80TB of local storage
• 2RU of rack space left for switches
• IB switches should be installed in a separate infrastructure rack
• Fully populated rack weighs ~1,749 lbs
• Depending on system configuration, rack-level power consumption is 16kW to 32kW
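The maximum-rack figures on the last two slides follow from the per-blade building blocks. The sketch below reproduces them under assumptions not stated in the deck: 8 cores per DP blade at ~3 GHz with 4 double-precision FLOPs per cycle, 64GB of memory per blade, and 2x 500GB 2.5" drives per blade.

```python
# Reproduce the HyperGreen maximum-rack specifications from per-blade assumptions.
# Per-blade figures below are illustrative assumptions, not published specs.

BLADES_PER_RACK = 80      # from the slide: up to 80 DP GreenBlades per 42U rack
CORES_PER_BLADE = 8       # assumed: dual-socket, quad-core
CLOCK_GHZ = 3.0           # assumed core clock
FLOPS_PER_CYCLE = 4       # assumed DP FLOPs/cycle/core
MEM_PER_BLADE_GB = 64     # assumed memory per blade
HDDS_PER_BLADE = 2        # slide: 160x 2.5" HDDs across 80 blades
HDD_TB = 0.5              # assumed 500GB drives

cores = BLADES_PER_RACK * CORES_PER_BLADE
peak_tf = cores * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0
mem_tb = BLADES_PER_RACK * MEM_PER_BLADE_GB / 1000.0
storage_tb = BLADES_PER_RACK * HDDS_PER_BLADE * HDD_TB

print(f"{cores} cores, {peak_tf:.2f} TF peak, "
      f"{mem_tb:.2f} TB memory, {storage_tb:.0f} TB storage")
# -> 640 cores, 7.68 TF peak, 5.12 TB memory, 80 TB storage
#    (the slide quotes the peak rounded down to 7.6TF)
```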
HyperGreen™ Cluster :: 136-Node Sample Cluster Configuration
Specifications – 136-Node Cluster
• 14x Sub-Racks
• 140x DP Compute Blades
• 4x spare Compute Blades
• 13TF Cluster (using 3GHz CPUs)
• 6.5TB of system memory
• 2U Management Node
• 1U Rackmount LCD/KVM
• 7U 144-port DDR IB Switch
HyperGreen™ Cluster :: 288-Node Sample Cluster Configuration
Specifications – 288-Node Cluster
• 29x Sub-Racks
• 290x DP Compute Blades
• 2x spare Compute Blades
• 27.6TF Cluster (using 3GHz CPUs)
• 13.8TB of system memory
• 2U Management Node
• 14U 288-port DDR IB Switch
HyperGreen™ Cluster :: 864-Node Sample Cluster Configuration
Specifications – 864-Node Cluster
• 88x Sub-Racks
• 880x DP Compute Blades
• 16x spare Compute Blades
• 82.9TF Cluster (using 3GHz CPUs)
• 41.4TB of system memory
• 2U Management Node
• 30U 864-port DDR IB Switch
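All three sample configurations follow the same scaling rule. The sketch below recomputes their totals under assumed per-node figures (8 cores per DP blade at the stated 3GHz clock, 4 double-precision FLOPs per cycle, and 48GB of memory per node); the slides state only the resulting cluster-level numbers.

```python
# Recompute the 136/288/864-node sample HyperGreen cluster figures.
# Per-node assumptions below are illustrative; the slides give only the totals.

CORES_PER_NODE = 8        # assumed: dual-socket, quad-core DP blade
CLOCK_GHZ = 3.0           # "using 3GHz CPU", per the slides
FLOPS_PER_CYCLE = 4       # assumed DP FLOPs/cycle/core
MEM_PER_NODE_GB = 48      # assumed memory per node

for nodes in (136, 288, 864):
    peak_tf = nodes * CORES_PER_NODE * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0
    mem_tb = nodes * MEM_PER_NODE_GB / 1000.0
    print(f"{nodes:>3} nodes: {peak_tf:6.2f} TF peak, {mem_tb:6.2f} TB memory")

# Output (the slides quote these values rounded down):
# 136 nodes:  13.06 TF peak,   6.53 TB memory
# 288 nodes:  27.65 TF peak,  13.82 TB memory
# 864 nodes:  82.94 TF peak,  41.47 TB memory
```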