Lecture 5: FAN and SOCRATES • FAN – Multiple Backtrace (1983) • SOCRATES – Learning (1988) • Test Generation Systems • Test Compaction • Summary VLSI Test: Lecture 5
FAN -- Fujiwara and Shimono (1983) • New concepts: • Immediate assignment of uniquely-determined signals • Unique sensitization • Stop backtrace at head lines • Multiple backtrace
PODEM Fails to Determine Unique Signals • The backtrace operation fails to set all 3 inputs of gate L to 1 • This causes unnecessary search
FAN -- Early Determination of Unique Signals • Determine all unique signals implied by current decisions immediately • Avoids unnecessary search
PODEM Makes Unwise Signal Assignments • Blocks fault propagation due to assignment J = 0
Unique Sensitization by FAN with No Search • FAN immediately sets the signals necessary to propagate the fault along the path over which the fault is uniquely sensitized
Headlines • Head lines H and J separate the circuit into 3 parts, for which test generation can be done independently
Contrasting Decision Trees • FAN decision tree vs. PODEM decision tree (figure)
Multiple Backtrace • FAN – breadth-first passes – 1 time • PODEM – depth-first passes – 6 times
AND Gate Vote Propagation • AND gate • Easiest-to-control input: # 0's = output # 0's; # 1's = output # 1's • All other inputs: # 0's = 0; # 1's = output # 1's • Example from the figure: output votes [5, 3] propagate as [5, 3] to the easiest-to-control input and [0, 3] to each of the other inputs
Multiple Backtrace Fanout Stem Voting • Fanout stem: • # 0's = Σ branch # 0's • # 1's = Σ branch # 1's • Example from the figure: branch votes [5, 1], [1, 1], [3, 2], [4, 1], [5, 1] sum to stem votes [18, 6]
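The two voting rules above can be sketched in Python. This is an illustrative sketch only; the function names and the [n0, n1] vote representation are my own, not taken from FAN's published pseudocode:

```python
# Votes are pairs [n0, n1]: how many objectives want a signal at 0 / at 1.

def and_gate_votes(output_votes, num_inputs, easiest_index=0):
    """Propagate an AND gate's output votes to its inputs.
    Only the easiest-to-control input inherits the 0-votes (one 0 suffices
    to set the output to 0); every input inherits the 1-votes (all inputs
    must be 1 to set the output to 1)."""
    n0, n1 = output_votes
    return [[n0 if i == easiest_index else 0, n1] for i in range(num_inputs)]

def stem_votes(branch_votes):
    """A fanout stem's votes are the sums over its branches."""
    return [sum(v[0] for v in branch_votes), sum(v[1] for v in branch_votes)]

# Slide examples: output [5, 3] on a 3-input AND gate,
# and five branches summing to stem votes [18, 6].
assert and_gate_votes([5, 3], 3) == [[5, 3], [0, 3], [0, 3]]
assert stem_votes([[5, 1], [1, 1], [3, 2], [4, 1], [5, 1]]) == [18, 6]
```

The OR-gate rule is dual (swap the roles of the 0-votes and 1-votes).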
Multiple Backtrace Algorithm

repeat
    remove entry (s, vs) from current_objectives;
    if (s is a head line)
        add (s, vs) to head_objectives;
    else if (s is not a fanout stem and not a PI)
        vote on gate s inputs;
        if (gate s input i is a fanout branch)
            vote on stem driving i;
            add stem driving i to stem_objectives;
        else
            add i to current_objectives;
until current_objectives is empty;
Rest of Multiple Backtrace

if (stem_objectives not empty)
    (k, n0(k), n1(k)) = highest-level stem from stem_objectives;
    if (n0(k) > n1(k)) vk = 0; else vk = 1;
    if ((n0(k) != 0) && (n1(k) != 0) && (k not in fault cone))
        return (k, vk);
    add (k, vk) to current_objectives;
    return multiple_backtrace(current_objectives);
remove one objective (k, vk) from head_objectives;
return (k, vk);
SOCRATES Learning (1988) • Static and dynamic learning: • (a = 1) ⇒ (f = 1) means that we learn (f = 0) ⇒ (a = 0) by applying the Boolean contrapositive theorem • Set each signal first to 0, and then to 1 • Discover implications • Learning criterion: remember f = vf only if: • f = vf requires all inputs of f to be non-controlling • A forward implication contributed to f = vf
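The learning step can be illustrated with a brute-force sketch: exhaustively simulate a toy circuit and record which signal values are forced by an assignment. The netlist (g = AND(a, b), f = OR(g, c)) and the function names are assumptions for illustration; a real SOCRATES implementation derives these by implication, not enumeration:

```python
from itertools import product

def simulate(a, b, c):
    """Toy netlist assumed for illustration: g = AND(a, b), f = OR(g, c)."""
    g = a & b
    f = g | c
    return {'a': a, 'b': b, 'c': c, 'g': g, 'f': f}

def learn_implications(signal, value):
    """Find every s = v forced by (signal = value): if all input patterns
    consistent with signal = value agree on s, the implication holds
    (and so does its contrapositive)."""
    rows = [simulate(*bits) for bits in product((0, 1), repeat=3)
            if simulate(*bits)[signal] == value]
    learned = {}
    for s in rows[0]:
        vals = {r[s] for r in rows}
        if s != signal and len(vals) == 1:
            learned[s] = vals.pop()
    return learned

# c = 1 forces f = 1 (forward implication); by the contrapositive,
# f = 0 forces c = 0, which the learner also discovers:
assert learn_implications('c', 1) == {'f': 1}
assert learn_implications('f', 0) == {'g': 0, 'c': 0}
```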
Improved Unique Sensitization Procedure • When a is the only D-frontier signal, find the dominators of a and set their inputs unreachable from a to 1 • Find dominators of the single D-frontier signal a and make common input signals non-controlling
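Dominators here are gates through which every path from a to a primary output must pass. A minimal path-enumeration sketch (fine for tiny circuits, exponential in general; the example netlist, with two paths from a reconverging at g3, is assumed):

```python
from functools import reduce

# fanout[s] = gates driven by signal s; example netlist assumed:
# a fans out to g1 and g2, which reconverge at primary output g3.
fanout = {'a': ['g1', 'g2'], 'g1': ['g3'], 'g2': ['g3'], 'g3': []}
outputs = ['g3']

def paths(node):
    """All paths from node to a primary output."""
    if node in outputs:
        return [[node]]
    return [[node] + p for nxt in fanout[node] for p in paths(nxt)]

def dominators(a):
    """Signals common to every a-to-output path (excluding a itself)."""
    common = reduce(lambda x, y: x & y, (set(p) for p in paths(a)))
    return common - {a}

# Both paths from a pass through g3, so g3 dominates a; the procedure
# would set g3's inputs unreachable from a to non-controlling values.
assert dominators('a') == {'g3'}
```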
Constructive Dilemma • [(a = 0) ⇒ (i = 0)] ∧ [(a = 1) ⇒ (i = 0)] ⇒ (i = 0) • If both assignments 0 and 1 to a make i = 0, then i = 0 is implied independently of a
Modus Tollens and Dynamic Dominators • Modus Tollens: (f = 1) ∧ [(a = 0) ⇒ (f = 0)] ⇒ (a = 1) • Dynamic dominators: • Compute dominators and dynamically learned implications after each decision step • Too computationally expensive
An ATPG System (flow) • A random pattern generator feeds a fault simulator; patterns that improve fault coverage are saved • While random patterns remain effective, this random-pattern loop repeats • When random patterns stop being effective, deterministic ATPG (D-alg. or Podem) targets the remaining faults • When coverage is sufficient, the vectors are compacted
Random-Pattern Generation • Easily gets tests for 60-80% of faults • Then switch to D-algorithm, Podem, or other ATPG method
Vector Compaction • Objective: Reduce the size of the test vector set without reducing fault coverage • Simulate faults with test vectors in reverse order of generation • ATPG patterns go first • Randomly-generated patterns go last (because they may have lower coverage) • When coverage reaches 100% (or the original maximum value), drop the remaining patterns • Significantly shortens the test sequence, reducing testing cost • A fault simulator is frequently used for compaction • Many recent (improved) compaction algorithms exist
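Reverse-order fault simulation can be sketched as follows. The detects table standing in for a fault simulator, and all names, are assumptions for illustration:

```python
def reverse_order_compact(vectors, detects):
    """Simulate vectors in reverse order of generation and keep only those
    that detect at least one not-yet-detected fault."""
    kept, covered = [], set()
    for v in reversed(vectors):
        new = detects[v] - covered    # faults this vector newly detects
        if new:
            kept.append(v)
            covered |= new
    return list(reversed(kept)), covered

# Random patterns r1, r2 were generated first; deterministic ATPG
# patterns d1, d2 last, so the latter are simulated first.
vectors = ['r1', 'r2', 'd1', 'd2']
detects = {'r1': {1, 2}, 'r2': {2, 3}, 'd1': {1, 4}, 'd2': {3, 5}}
kept, covered = reverse_order_compact(vectors, detects)
# r1 is dropped: every fault it detects is already covered.
assert kept == ['r2', 'd1', 'd2'] and covered == {1, 2, 3, 4, 5}
```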
Static and Dynamic Compaction of Sequences • Static compaction • ATPG should leave unassigned inputs as X • Two patterns are compatible if they have no conflicting values for any PI • Combine two tests ta and tb into one test tab = ta ∩ tb using intersection • Detects the union of the faults detected by ta and tb • Dynamic compaction • Process every partially-done ATPG vector immediately • Assign 0 or 1 to PIs to test additional faults
Compaction Example • t1 = 0 1 X, t2 = 0 X 1, t3 = 0 X 0, t4 = X 0 1 • Combine t1 and t3, then t2 and t4 • Obtain: • t13 = 0 1 0, t24 = 0 0 1 • Test length shortened from 4 to 2
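The intersection step can be sketched in Python (a minimal sketch; vectors are strings over {'0', '1', 'X'}):

```python
def intersect(ta, tb):
    """Return ta ∩ tb if the vectors are compatible (no position has
    conflicting 0/1 values); return None on a conflict."""
    out = []
    for x, y in zip(ta, tb):
        if x == 'X':
            out.append(y)           # ta's don't-care takes tb's value
        elif y == 'X' or x == y:
            out.append(x)           # tb's don't-care, or matching values
        else:
            return None             # conflict: vectors incompatible
    return ''.join(out)

# Slide example: combine t1 with t3, and t2 with t4.
assert intersect('01X', '0X0') == '010'   # t13
assert intersect('0X1', 'X01') == '001'   # t24
assert intersect('0X1', '0X0') is None    # t2, t3 conflict in the last PI
```

Note the pairing matters: t1 is also compatible with t2, but combining them would leave t3 and t4 incompatible, yielding no compaction.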
Summary • Most combinational ATPG algorithms use D-algebra • D-Algorithm is a complete algorithm: • Finds a test, or • Determines the fault to be redundant • Complexity is exponential in circuit size • Podem is another complete algorithm: • Works on primary inputs – search space is smaller than that of the D-algorithm • Exponential complexity, but several orders of magnitude faster than the D-algorithm • More efficient algorithms are available – FAN, Socrates, etc. • See M. L. Bushnell and V. D. Agrawal, Essentials of Electronic Testing for Digital, Memory and Mixed-Signal VLSI Circuits, Springer, 2000, Chapter 7.