Lunarc history
• 1986 – 1988 IBM 3090 150/VF
• 1988 – 1991 IBM 3090 170S/VF
• 1991 – 1997 Workstations, IBM RS/6000
• 1994 – 1997 IBM SP2, 8 processors
• 1998 Origin 2000, 46 processors, R10000
• 1999 Origin 2000, 100 processors, R12000, 300 MHz
• 2000 Origin 2000, 116 processors, R12000, 300 MHz
• 2000 Beowulf cluster with 40 AMD 1.1 GHz CPUs
• 2001 64 of the Origin 2000 processors were relocated to NSC
• 2002 A 64-processor cluster, AMD Athlon 1900+ (WhenIm64)
• 2003 128 processors added (Toto7), Intel P4 2.53 GHz
Current hardware
• Husmodern, cluster
• 32 nodes, 1.1 GHz AMD Athlon, installed 2000-12-01
• WhenIm64/Toto7, clusters
• 65 nodes, AMD Athlon 1900+, installed 2002-04-08
• 128 nodes, Intel P4 2.53 GHz, installed 2003-02-18
• File server, login nodes etc.
• Ask, SGI Origin 2000
• 48 nodes, R12000, 300 MHz, 12 GB
About Lunarc
• Current staff
• 1.3 FTE
• Future administration
• 2.5 FTE (minimum, depending on contract formulations)
Current users
• Core groups
• Theoretical Chemistry, Physical Chemistry 2, Structural Mechanics
• Other large users
• Fluid Mechanics, Fire Safety Engineering, Physics
• New groups
• Inflammation Research, Biophysical Chemistry, Astronomy
Lunarc web
• User registration
• System information
• System usage
• Job submission?
Using clusters
• Log in
• Use ssh, Unix tools etc.
• mkdir proj
• sftp/scp user@...
• vi/joe submit script
• Submit script documentation
• Queue management
• qsub script
• Transfer result files back
• sftp/scp
For many, this is a straightforward process, so why do we get so many questions?
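The steps above can be sketched as a shell session. The host name, file names, and program name here are hypothetical, and the submit script assumes a PBS-style batch system; the directives and queue commands on the actual cluster may differ:

```shell
# Log in to the cluster (hypothetical host name)
ssh user@cluster.lunarc.lu.se

# On the cluster: create a project directory
mkdir proj

# From the workstation: copy input files over (hypothetical file name)
scp input.dat user@cluster.lunarc.lu.se:proj/

# Write a submit script with vi or joe -- a minimal PBS-style
# example; the real directives depend on the local queue system:
cat > proj/run.pbs <<'EOF'
#PBS -N myjob              # job name
#PBS -l nodes=4            # number of nodes requested
#PBS -l walltime=01:00:00  # maximum run time
cd $PBS_O_WORKDIR
./my_program input.dat > output.dat
EOF

# Submit the job to the queue and check its status
cd proj
qsub run.pbs
qstat

# When the job has finished, transfer the results back
scp user@cluster.lunarc.lu.se:proj/output.dat .
```

The point of the submit script is that the job runs unattended once the queue schedules it, so the user can log out after `qsub` and fetch the results later.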
Good knowledge about local circumstances
• Traditional users -> clusters -> grids
• User interface
• Grid of clusters