This article discusses the firewall requirements for distributed and grid computing, including applications, web servers and services, hardware, and access control. It also explores the networking needs for accessing external networks and storage facilities in a campus-wide grid.
Firewall Requirements for Distributed and Grid Computing • Mary Thomas, Dept. of Computer Science; Director, Advanced Computing Environments Lab • http://acel.sdsu.edu
D&G (Distributed & Grid) Firewall Requirements • Applications • Web servers & services • Grid servers & services • Hardware in ACELab • My access • My list
Projects/Applications • Communities we interact with: • SDSU: Frost, Kelley/Edwards/Rohwer (bio), Impelluso, Paolini (ME), Castillo (CSRC) • External: NSF/DOE projects, TeraGrid, DOE Science Grid/Fusion Grid • International: ITER (next year) • Research (thesis), development & production (specific projects, e.g. a bio portal) for SDSU & external communities • Host/develop services: • Web and Grid • Informational, interactive (secure access) • Web portals (multiple) • Lab & research project websites • Science applications (e.g. Rob Edwards' metagenomics portal)
Web (specific) • Web portals: • HTML, Perl/CGI, Python/Zope, Java (JSP, servlets, portlets), and whatever else comes along • Web services (SOAP/HTTP) • Server & services hosting environment: • TCP/IP, HTTP, and HTTPS over SSL • Java/Tomcat/Apache • Python & Perl • Port 80, but we also need a test & dev port range
Grid (configurable port #'s) • Informational: no security involved • Interactive: • Transactions require authentication & authorization • GRAM (#), GridFTP (#) • Certificate Authority (CA; PKI), MyProxy credential server • Toolkits: • Globus/IBM Toolkit, GridPort, JavaCoG, etc. • MyProxy • Web Services Resource Framework (WSRF)
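Since the interactive grid services above each listen on their own TCP port, a quick reachability probe can verify a firewall configuration from the client side. This is a minimal sketch, assuming the default Globus-era port numbers cited later in these slides (GRAM 2119, GridFTP 2811, MyProxy 7510); `check_port` and `probe_grid_host` are hypothetical helper names, and a real deployment would confirm the ports against its own toolkit configuration.

```python
import socket

# Assumption: service ports as listed on the "Requirements: Ports" slide.
GRID_SERVICES = {
    "GRAM": 2119,
    "GridFTP": 2811,
    "MyProxy": 7510,
}

def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

def probe_grid_host(host):
    """Probe every known grid service port on a host; map service -> reachable."""
    return {name: check_port(host, port) for name, port in GRID_SERVICES.items()}
```

A TCP connect only shows the port is open through the firewall; it says nothing about whether the service behind it will authenticate the caller.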
Individual • Access the rohan, sciences, and projects mail servers from my Thunderbird client, running on my laptop off campus • SSH/SCP access to rohan and all my dev systems • Off-site access to my development portal systems
Hardware (current but growing) • 4 Dell 1850 servers, several student systems • 1 × 6 TB storage facility • All need remote (off-campus) access via SSH, but also access to dev services • Gig-E network (in progress): • TeraByte files (10^12 bytes = 1 million × 1 million) • Between campus & TG node at SDSC (via Frost lab?)
Requirements: Ports (current list) • Hosting services, old-style Grid (Globus): GRAM (2119), GRIS (2135), GIIS (2136), GridFTP (2811), MyProxy (7510) • WSRF: any port range can work • Access Grid (xx) • Web (80, configurable) • Remote clients: access to these services by fixed or dynamic clients • Also: HTTP (80, range), SSH (22?), HTTPS (443)
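The port list above is exactly the kind of thing worth keeping in one machine-readable place so the firewall rules stay in sync with it. As a hedged sketch, assuming a Linux host with iptables (the slide does not specify the firewall platform), the list could be expanded into ACCEPT rules like this; `iptables_rules` is a hypothetical helper, and the port numbers are copied from the slide.

```python
# Assumption: port numbers as given on the "Requirements: Ports" slide.
PORTS = {
    "GRAM": 2119,
    "GRIS": 2135,
    "GIIS": 2136,
    "GridFTP": 2811,
    "MyProxy": 7510,
    "HTTP": 80,
    "HTTPS": 443,
    "SSH": 22,
}

def iptables_rules(ports):
    """Emit one iptables ACCEPT rule per named service port, lowest port first."""
    return [
        f"iptables -A INPUT -p tcp --dport {port} -j ACCEPT  # {name}"
        for name, port in sorted(ports.items(), key=lambda kv: kv[1])
    ]

for rule in iptables_rules(PORTS):
    print(rule)
```

Generating the rules from one table means adding a new grid service (or a WSRF port range) is a one-line change rather than a hand edit of the firewall.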
Requirements: Network • Need, at minimum, a Gig-E network capable of moving TByte files between computational resource (cluster) & storage facility • Need a campus-wide network • Need to access: • External networks: National Light Rail (NLR), Internet2, OptIPuter, TeraGrid, etc. • PetaByte (1000 TByte) storage facilities: SDSC; other NSF projects (Human Genome, Astronomy, High Energy Physics, etc.)
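The Gig-E requirement above is easy to sanity-check with back-of-the-envelope arithmetic: at an ideal wire rate (an assumption; real transfers lose throughput to protocol overhead), a 1 TByte file needs over two hours on a 1 Gbit/s link, which is why anything slower than Gig-E is impractical here.

```python
# Assumption: ideal wire rate, no protocol overhead or contention.
TB = 1e12                      # bytes in a terabyte (10**12)
GIGE_BYTES_PER_SEC = 1e9 / 8   # 1 Gbit/s = 125 MB/s

seconds = TB / GIGE_BYTES_PER_SEC
hours = seconds / 3600
print(f"{seconds:.0f} s ≈ {hours:.1f} h")  # 8000 s ≈ 2.2 h
```

The same arithmetic scales linearly: a PetaByte (1000 TByte) at the same rate would take roughly three months, which motivates the external links (NLR, TeraGrid) listed above.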
Campus Grid • [Diagram: campus grid topology — the SDSU NOC connects external networks (NLR/Internet2, TeraGrid) over Gig-E (ad hoc/Frost lab) to campus switches serving the Kelley lab, the Edwards/Rohwer lab, the Frost lab, and ACEL (GMCS); each site hosts some mix of PGE clusters, HPC, storage, visualization, servers, workstations, and ACEL services on the main campus network.]
SDSU Data Grid Proposal (forming plans now) • Goal: obtain NSF funding to instantiate a major campus data grid (storage, security) • All campus research projects with large data/distributed/grid workloads will have these requirements • NSF CFP (due July): campus research infrastructure; 5 years of funding • Management teams (funding + support): • CSRC, Jose Castillo (main host/admin) • Campus IT (networks, accounts, etc.) • Campus Security (authentication, ports, etc.) • Faculty/researchers: use, advice, allocations, help with funding
Contact Information • Mary Thomas (4-1694): • mthomas@sciences.sdsu.edu • ACE Lab: • http://acel.sdsu.edu • SDSU Data Grid Project: • http://pipeline3.acel.sdsu.edu:8080/datagrid • Globus: • http://www.globus.org