This project involves the development of advanced technologies for space telescopes, including a fully active subscale telescope and AI-based, self-correcting edge sensors. The project also includes the use of cluster computing for segmented mirror control.
Blue Line Engineering SBIRs
• NAS8-99081: Fully Active Subscale Telescope (FAST)
• NAS8-01034: AI-Based, Self-Correcting, Self-Reporting Edge Sensors
MSFC CDDF
• Marshall Optical Control Cluster Computer (MOC3)
Blue Line Engineering NAS8-99081: Fully Active Subscale Telescope (FAST)
• Phase II completion date: March 26, 2002
• Objectives:
  • 1/8-scale model of the NGST yardstick
  • Highly versatile testbed for NASA researchers
  • Demonstration events in the lab and exhibit hall
• Testbed components:
  • Hinges, latches, actuators, and deployment mechanisms
  • Seven 33 cm diameter primary mirror segments
  • Electronics for static figure correction and maintenance
  • Motorized stow/deploy
  • Diffraction-limited performance (λ > 2 microns)
Optical Design: Xinetics NAS8-98243, Large, Cryogenic Ultralightweight Mirror Technology
• Aperture: equivalent to 92.5 cm diameter filled circle (0.672 m²)
• Obscuration: < 10%
• Stowed: cylinder, 50 cm diameter x 100 cm tall
• Prescription: parabolic, f/1.25, 2.5 m focal length
• FOV: > 4 arc minutes
• Segments: hexagonal
• FTF diameter: 33.3 cm
• Thickness: 1.8 cm
• Mass: < 1 kg/segment (35 kg total including electronics)
• Performance: diffraction limit at 2 µm (λ/14 = 143 nm ~ 1/4 wave visible)
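As a quick sanity check of the numbers on this chart, the short C program below reproduces the 0.672 m² collecting area and the 92.5 cm equivalent circular aperture from seven 33.3 cm flat-to-flat hexagons, plus the λ/14 = 143 nm wavefront budget at 2 µm. The 550 nm reference for "visible" is my assumption; this is only a cross-check, not part of the project software.

```c
/* Cross-check of the segmented-aperture figures quoted above.
 * Standard geometry only; rounding as on the chart. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi     = 3.14159265358979;
    const double ftf    = 0.333;    /* hex segment flat-to-flat width, m  */
    const double nseg   = 7.0;      /* primary mirror segments            */
    const double lambda = 2.0e-6;   /* diffraction-limit wavelength, m    */

    /* A regular hexagon of flat-to-flat width f has area (sqrt(3)/2) f^2. */
    double seg_area   = 0.5 * sqrt(3.0) * ftf * ftf;
    double total_area = nseg * seg_area;               /* ~0.672 m^2      */

    /* Filled circular aperture with the same collecting area.            */
    double equiv_diam = sqrt(4.0 * total_area / pi);   /* ~0.925 m        */

    /* lambda/14 wavefront budget, expressed in visible waves (550 nm).   */
    double wfe = lambda / 14.0;                        /* ~143 nm         */
    double vis = wfe / 550.0e-9;                       /* ~0.26 wave      */

    printf("segment area        : %.4f m^2\n", seg_area);
    printf("total area (7 segs) : %.3f m^2\n", total_area);
    printf("equivalent circle   : %.1f cm diameter\n", 100.0 * equiv_diam);
    printf("lambda/14 at 2 um   : %.0f nm (~%.2f wave at 550 nm)\n",
           1.0e9 * wfe, vis);
    return 0;
}
```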
NAS8-01034: AI-Based, Self-Correcting, Self-Reporting Edge Sensors
• Phase I completion date: August 17, 2001
• Objective: demonstrate the feasibility of enhanced edge sensors to deploy, align, and phase-match the primary mirror segments of space-based telescopes
• Design features:
  • Operational environment: 30 K to 370 K
  • Fuzzy logic
  • Health and status monitoring (self-reporting)
  • Neural networks (self-correcting, self-tuning)
  • New error compensation methods for super accuracy
  • Multi-mode measurements: phasing and gap
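The chart does not say how the self-correcting behavior is implemented, but its flavor can be suggested with a minimal sketch: a raw gap reading is corrected by a temperature-dependent calibration whose coefficients are retuned whenever a trusted reference (for example, an interferometric phasing check) is available. The sensor model, the normalized update rule, and every name below are hypothetical illustrations, not Blue Line Engineering's design.

```c
/* Hypothetical sketch of a self-correcting, self-tuning edge-sensor reading.
 * The sensor model and update rule are illustrative only; they are not the
 * Blue Line Engineering implementation. */
#include <stdio.h>

typedef struct {
    double c0;   /* calibration offset                   */
    double c1;   /* calibration temperature coefficient  */
} EdgeSensorCal;

/* Corrected gap estimate from a raw reading and the local temperature (K). */
static double corrected_gap(const EdgeSensorCal *cal, double raw, double temp_k)
{
    return raw - (cal->c0 + cal->c1 * (temp_k - 293.0));
}

/* "Self-tuning": nudge the calibration toward a trusted reference
 * measurement using a normalized least-mean-squares step. */
static void self_tune(EdgeSensorCal *cal, double raw, double temp_k,
                      double reference_gap)
{
    const double gain = 0.5;
    double dt   = temp_k - 293.0;
    double err  = corrected_gap(cal, raw, temp_k) - reference_gap;
    double norm = 1.0 + dt * dt;           /* keeps the step size stable */
    cal->c0 += gain * err / norm;
    cal->c1 += gain * err * dt / norm;
}

int main(void)
{
    EdgeSensorCal cal = { 0.0, 0.0 };
    double temps[] = { 300.0, 250.0, 150.0, 50.0 };   /* cooldown profile, K */

    for (int i = 0; i < 4; ++i) {
        /* Simulated raw reading: true 5.000 um gap plus a thermal bias. */
        double raw = 5.0 + 0.002 * (temps[i] - 293.0);
        self_tune(&cal, raw, temps[i], 5.0);          /* reference = 5.000 um */
        printf("T = %5.1f K   corrected gap = %.4f um\n",
               temps[i], corrected_gap(&cal, raw, temps[i]));
    }
    return 0;
}
```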
• Phase I: experimental testing, computer simulation, and modeling
• Phase II: two standard-model edge sensors developed, fully characterized, and documented
MSFC CDDF: Marshall Optical Control Cluster Computer (MOC3)
• Project schedule: FY01 & FY02
• Investigators:
  • PI: John Weir/ED19
  • Co-I: Donald Larson/SD71
• Objectives:
  • 10³-fold increase in computing capability for managing active primary mirror segments
  • Improved techniques for minimizing wavefront error
  • Experience with parallel computing technologies and software, ground-based computer clusters, and embedded clusters in future spacecraft
[Figure: Beowulf Cluster Computer, after Ridge et al., 1997]
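To give a concrete sense of the per-segment computation such a cluster would speed up, the sketch below fits and reports piston, tip, and tilt from a handful of wavefront samples over one segment by least squares. The sample geometry, the three-term fit, and the numbers are illustrative assumptions, not the MOC3 control algorithm.

```c
/* Illustrative per-segment computation: least-squares fit of piston, tip,
 * and tilt to wavefront samples over one mirror segment.  The geometry and
 * data are made up; this is not the MOC3 control software. */
#include <stdio.h>

#define NPTS 5

/* Solve the 3x3 normal equations A z = b by Cramer's rule. */
static void solve3(const double A[3][3], const double b[3], double z[3])
{
    double det = A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
               - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
               + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]);
    for (int k = 0; k < 3; ++k) {
        double M[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                M[i][j] = (j == k) ? b[i] : A[i][j];
        double dk = M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
                  - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
                  + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]);
        z[k] = dk / det;
    }
}

int main(void)
{
    /* Wavefront samples (x, y in segment radii; w in microns). */
    double x[NPTS] = { -0.5,  0.5, 0.0, -0.5, 0.5 };
    double y[NPTS] = { -0.5, -0.5, 0.0,  0.5, 0.5 };
    double w[NPTS] = { 0.08, 0.12, 0.10, 0.06, 0.14 };

    /* Build the normal equations for w = piston + tip*x + tilt*y. */
    double A[3][3] = {{0}}, b[3] = {0}, z[3];
    for (int i = 0; i < NPTS; ++i) {
        double phi[3] = { 1.0, x[i], y[i] };
        for (int r = 0; r < 3; ++r) {
            b[r] += phi[r] * w[i];
            for (int c = 0; c < 3; ++c)
                A[r][c] += phi[r] * phi[c];
        }
    }
    solve3(A, b, z);
    printf("piston = %.4f um, tip = %.4f, tilt = %.4f\n", z[0], z[1], z[2]);
    return 0;
}
```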
MSFC CDDF: Marshall Optical Control Cluster Computer (MOC3)
• Plan:
  • Purchase a Beowulf computer cluster and associated Linux software
  • Use the Beowulf in conjunction with optical testbeds to develop:
    • the use of cluster computing for segmented mirror control
    • software for astronomy and wavefront control
    • application programs for distributed computing (e.g., Fortran 99)
• Beowulf background:
  • Technology of clustering Linux computers to form a parallel, virtual supercomputer
  • One server node with client nodes connected via Ethernet or another network
  • No custom components; mass-market commodity hardware: PCs capable of running Linux, Ethernet adapters, and switches
  • Initiated in 1994 under the NASA High Performance Computing and Communications program, in the Earth and space sciences project at the Goddard Space Flight Center
  • October 1996: gigaflops sustained performance on a space science application for a cost under $50K
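As a sketch of how such a cluster divides segmented-mirror work, the MPI example below (MPI libraries such as MPICH and LAM-MPI appear on the next chart) assigns segments to processes round-robin and reduces a combined wavefront-error figure back to the head node. The per-segment routine and all names are placeholders for illustration, not the actual MOC3 software.

```c
/* Sketch: farm per-segment control computations out across Beowulf nodes
 * with MPI and collect a wavefront-error summary on the head node.
 * The per-segment routine is a placeholder, not the MOC3 software. */
#include <stdio.h>
#include <math.h>
#include <mpi.h>

#define NSEG 7   /* primary mirror segments */

/* Placeholder for the expensive per-segment work (figure fit, actuator
 * command solve, ...). Returns that segment's RMS figure error in nm. */
static double control_segment(int seg)
{
    return 10.0 + seg;   /* dummy result */
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process handles the segments assigned to it round-robin. */
    double local_sq = 0.0;
    for (int seg = rank; seg < NSEG; seg += size) {
        double rms = control_segment(seg);
        local_sq += rms * rms;
    }

    /* The head node (rank 0) gathers a combined error figure. */
    double total_sq = 0.0;
    MPI_Reduce(&local_sq, &total_sq, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("combined RMS over %d segments: %.2f nm\n",
               NSEG, sqrt(total_sq / NSEG));

    MPI_Finalize();
    return 0;
}
```

On a cluster like this, such a program would typically be built with an MPI compiler wrapper such as mpicc and launched across the head and slave nodes with mpirun.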
MSFC CDDF: Marshall Optical Control Cluster Computer (MOC3)
• 7 slave nodes, each:
  • 4U rackmount ATX case with 250 W UL power supply
  • Dual-processor 1 GHz Intel Pentium III, 512 MB RAM, 20 GB HD
  • Dolphin Interconnect's Wulfkit
• Head node:
  • 4U rackmount ATX case with 250 W UL power supply
  • Dual-processor 1 GHz Intel Pentium III, 512 MB RAM, 20 GB HD
  • 32x CD-R/W, SVGA with 32 MB, tape backup
  • Dolphin Interconnect's Wulfkit
• Accessories: UPS, network switch, KVM switch, rackmount cabinet
• "Huinalu" at MHPCC: 260 dual Pentium III 933 MHz nodes
• Software:
  • Enhanced Red Hat Linux distribution v7.0
  • Portland Group Workstation 3.1 compilers for C
  • PVM, MPICH, LAM-MPI communication libraries
  • ScaLAPACK with ATLAS libraries
  • Portable Batch System (PBS)
  • Parallel Virtual File System (PVFS)
  • Dogsled administration and monitoring tool
  • LessTif, Mesa (OpenGL), IBM Data Explorer
  • SCA Linda (4 CPUs)
  • MI/NASTRAN for the PC from Macro Industries