Comments on The Progress of Computing (William Nordhaus) Iain Cockburn, Boston University and NBER
Key findings • Computer performance (per $ or per labor hour) has increased by a factor of 10^12 since 1900 • Essentially all the gains have come since 1940: post-war CAGR of performance of about 55%
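As a quick arithmetic check of these two figures (my own back-of-the-envelope calculation, not from the paper): if roughly all of a 10^12 gain accrued over the ~60 post-war years, the implied compound annual growth rate comes out close to the stated "about 55%".

```python
# Back-of-the-envelope check (assumption: ~60 years, 1940-2000):
# a 10^12 total gain implies a compound annual growth rate of
implied_cagr = 1e12 ** (1 / 60) - 1
print(f"{implied_cagr:.1%}")  # → 58.5%
```

The result (≈58.5%) is in the same ballpark as the slide's "about 55%", so the two headline numbers are mutually consistent.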
Conclusion • “Output” measures of quality-adjusted prices decline much faster than “input-based” hedonics • standard hedonics “may be far wide of the mark”
Not so fast… • Hedonics not intended to measure productivity, rather changes in WTP for characteristics • PPI prices a much bigger bundle than “horsepower” • What are appropriate output performance measures? • What are the connections between performance, pricing, and productivity?
Benchmarking • Computer scientists all say: "execution time for your application"
Metrics of Performance (by level of the system; diagram reconstructed as a list) • Application: answers per month; operations per second • Programming language: MSOPS • Compiler: millions of instructions per second (MIPS); millions of (FP) operations per second (MFLOPS) • System architecture (CPU datapath and control): megabytes per second • Function units: cycles per second (clock rate) • Transistors, wires, pins
Output measures for computing • Computation throughput: “information per second”, MSOPS • I/O bandwidth • Availability/uptime • Latency • Transaction processing time/integrity • Switching/routing efficiency • Accuracy: error rates/correction, rounding, correspondence to physical systems • Application execution time • “interface speed” : page down, recalc, redraw • program load • task completion time: database query, matrix inversion, spell check
Where do more/cheaper MSOPS make a big difference? • MSOPS-constrained scientific computing: 3D fluid dynamics (weather forecasting); geophysics; engineering structural analysis (airframes); molecular modeling; bioinformatics; simulation; BLP • MSOPS-constrained commercial computing: animation/graphics; optimization/search problems; "data mining"; reservoir modeling; automotive/aerospace design; protein folding • 50% of "Top-500" computer users are now industrial
Classes of problems where a faster processor makes little difference • High-bandwidth/high overhead networks • WWW searches • Transaction processing • IO-constrained activities • e.g. waiting for user input
True cost of computing: hardware ≈ 20% • In MIS, the common reference point is TCO, "Total Cost of Ownership" • Hardware cost (of which architecture is about 30%), plus: support; personnel training; application development; upgrades; consumables; downtime; security; depreciation; etc.
Moore's Law vs. Amdahl's Law • Moore's law: geometric progression in performance measures (so far) • Amdahl's law: diminishing returns to speeding up small fractions of a task: Speedup_overall = ExTime_old / ExTime_new = 1 / ((1 − Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced) • Example: floating-point instructions improved to run 2×, but only 10% of actual instructions are FP, so Speedup_overall = 1 / (0.9 + 0.1/2) ≈ 1.053
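The slide's Amdahl's-law formula and its floating-point example can be sketched directly (the function name here is my own; the formula and the 2×/10% numbers are the slide's):

```python
def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    """Overall speedup when only a fraction of execution time is improved
    (Amdahl's law): 1 / ((1 - f) + f / s)."""
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# Slide's example: FP instructions run 2x faster, but are only 10% of the work
print(round(amdahl_speedup(0.10, 2.0), 3))  # → 1.053
```

The point of the example is the asymmetry: a 100% improvement in one component yields only a 5.3% overall gain, which is why raw component-level performance metrics can overstate delivered performance.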
Peak power, unused FLOPs, option value • More than 95% of PC computation capacity “idle” • Projects to harness idle time through distributed computing • SETI@home, Condor, Entropia, Compute-Against-Cancer, Folding@home, NASA/NSF “grid computing” • What has happened to $ per used MSOP? • If we are buying an option to use peak performance in bursts, how to think of pricing that?
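The "$ per used MSOP" question above can be made concrete with a simple utilization adjustment (a hypothetical illustration using the slide's 95%-idle figure, not a calculation from the paper):

```python
def cost_per_used_msop(price_per_peak_msop, utilization):
    """Effective price per MSOP actually consumed.
    Hypothetical illustration: dividing the peak-capacity price
    by the utilization rate."""
    return price_per_peak_msop / utilization

# If >95% of capacity sits idle (utilization ~5%), each *used* MSOP
# effectively costs 20x its peak-capacity price
print(cost_per_used_msop(1.0, 0.05))  # → 20.0
```

On this accounting, the price of *used* computation has fallen far more slowly than the price of *peak* computation, which is one way to motivate the option-value framing on the slide.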
WTP for performance? • Decreasing marginal utility of anything • MSOPS doesn’t fully capture aspects of performance that matter to users • What’s the choice set?
Suggestions • Think about pricing a richer notion of “output” of computing devices • application execution time • IO capacity (+ connectivity) • portability/scalability • Investigate what it is that economic actors value when purchasing computer power • Puzzles: • PC vs. time-sharing mainframe • “pretty pictures” & the dancing paperclip