CENG 532 - Distributed Computing Systems: Measures of Performance
Grosch’s Law - 1960’s • “To sell a computer for twice as much, it must be four times as fast” • This held at the time, but it soon became meaningless • After 1970, it became possible to make faster computers and sell them even cheaper… • Ultimately, switching speeds reach a limit: the speed of light on an integrated circuit…
Von Neumann’s Bottleneck • Serial single-processor computer architectures followed John von Neumann’s architecture of the 1940s-1950s • One processor, a single control unit, a single memory • This is no longer the case: low-cost parallel computers can easily deliver the performance of the fastest single-processor computer…
Amdahl’s Law; 1967 • Let the speedup (S) be the ratio of serial time (one processor) to parallel time (N processors): S = T1/TN • If f is the serial fraction of the problem and 1-f is the parallel fraction, then TN = T1*f + T1*(1-f)/N • Hence S = 1/(f + (1-f)/N), and thus S < 1/f
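To make the formula concrete, here is a minimal Python sketch (the helper name amdahl_speedup is purely illustrative) that evaluates S = 1/(f + (1-f)/N) for a fixed serial fraction:

```python
def amdahl_speedup(f, n):
    """Amdahl's Law: speedup with serial fraction f on n processors."""
    return 1.0 / (f + (1.0 - f) / n)

# With a 10% serial fraction the speedup saturates quickly:
for n in (1, 10, 100, 1000):
    print(n, round(amdahl_speedup(0.10, n), 2))
# The output approaches, but never exceeds, 1/f = 10.
```

Running it with f = 0.10 shows the tenfold ceiling discussed on the next slide.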
Amdahl’s Law; 1967 • At f = 0.10, Amdahl’s Law predicts at best a tenfold speedup, which is very pessimistic • This barrier was soon broken, encouraged by the Gordon Bell Prize!
Gustafson-Barsis Law; 1988 • A team of researchers at Sandia Labs (John Gustafson and Ed Barsis), using a 1024-processor nCube/10, overthrew Amdahl’s Law by achieving a 1000-fold speedup with f = 0.004 to 0.008 • According to Amdahl’s Law, the speedup could have been at most 125 to 250 • The key point was that 1-f was not independent of N.
Gustafson-Barsis Law; 1988 • They reinterpreted the speedup formula by scaling the problem up to fit the parallel machine: T1 = f + (1-f)N and TN = f + (1-f) = 1 • The speedup can then be computed as S = T1/TN = N - (N-1)f
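A minimal sketch of the scaled-speedup formula S = N - (N-1)f, in the same illustrative style as the Amdahl example above; the serial fractions below are the values quoted for the Sandia runs:

```python
def gustafson_speedup(f, n):
    """Gustafson-Barsis scaled speedup: S = N - (N - 1) * f."""
    return n - (n - 1) * f

# Scaled speedup on 1024 processors for the reported serial fractions:
for f in (0.004, 0.008):
    print(f, round(gustafson_speedup(f, 1024), 1))
# Roughly 1020 and 1016, consistent with the ~1000-fold speedups reported.
```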
Extreme case analysis • Assuming Amdahl’s Law, an upper and a lower bound can be given for the speedup, under unrealistic assumptions: N/log2(N) <= S <= N • where the upper bound N comes from comparing against the single-processor time and the log2(N) factor comes from a divide-and-conquer decomposition
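A small sketch (the helper name speedup_bounds is illustrative) that prints both bounds for a few machine sizes:

```python
import math

def speedup_bounds(n):
    """Bounds on speedup under the extreme-case analysis: N/log2(N) <= S <= N."""
    lower = n / math.log2(n) if n > 1 else 1.0
    return lower, float(n)

for n in (2, 16, 256, 1024):
    lo, hi = speedup_bounds(n)
    print(f"N={n:5d}  lower={lo:7.1f}  upper={hi:7.1f}")
```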
Inclusion of the communication time • Some researchers (e.g. Gelenbe) suggest that the speedup be approximated by S = 1/C(N), where C(N) is some function of N • For example, C(N) can be estimated as C(N) = A + B*log2(N), where A and B are constants determined by the communication mechanisms
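A sketch of this model exactly as stated, S = 1/C(N) with C(N) = A + B*log2(N); the constants below are assumed placeholders, not values measured for any real machine:

```python
import math

def comm_speedup(n, a=0.01, b=0.002):
    """Speedup model S = 1 / C(N), with C(N) = A + B * log2(N).

    a and b are assumed, illustrative constants; in practice they would be
    fitted to the communication behaviour of the target machine."""
    return 1.0 / (a + b * math.log2(n))

# As N grows, C(N) grows and the achievable speedup shrinks:
for n in (2, 64, 1024):
    print(n, round(comm_speedup(n), 1))
```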
Benchmark Performance • A benchmark is a program whose purpose is to measure a performance characteristic of a computer system, such as floating-point speed, I/O speed, or performance on a restricted class of problems • Benchmarks are typically either • Kernels of real applications, such as Linpack or the Livermore Loops, or • Synthetic programs approximating the behavior of real problems, such as Whetstone and Wichmann…
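As a toy illustration of the benchmarking idea only (not a reimplementation of Linpack, the Livermore Loops, or Whetstone), here is a minimal Python sketch that times a small floating-point kernel and reports an approximate MFLOPS figure:

```python
import time

def flop_kernel(n):
    """Toy floating-point kernel: roughly 2*n flops (one multiply and one add per iteration)."""
    x = 1.0001
    acc = 0.0
    for _ in range(n):
        acc += x * x
    return acc

n = 1_000_000
start = time.perf_counter()
flop_kernel(n)
elapsed = time.perf_counter() - start
print(f"~{2 * n / elapsed / 1e6:.1f} MFLOPS (toy kernel; interpreter overhead included)")
```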