A Fast Sequential Learning Technique for Real Circuits With Application to Enhancing ATPG Performance
Aiman H. El-Maleh
Department of Computer Engineering, King Fahd University of Petroleum & Minerals
Outline
• Motivation
• Sequential learning technique
• Implication learning
• Tie gate learning
• Enhancing ATPG performance
• Experimental results
• Conclusions
Motivation
• Learning can significantly enhance a search process:
  • Reduces the number of decision nodes
  • Identifies conflicts sooner, reducing the number of backtracks
• Learning is used in CAD, most notably in ATPG
• Learning is typically performed on combinational logic
• Sequential ATPG is more complex than combinational ATPG due to signal dependencies across time frames
• Density of encoding is a key factor in the complexity of sequential ATPG
Sequential learning technique
• Identify fanout stems
• For each stem:
  • Inject a logic 0 and propagate it forward across time frames
  • Inject a logic 1 and propagate it forward across time frames
[Figure: example circuit unrolled over frames T0–T3; learned relations include G2=0 at T=i-1 with G3=0 at T=i, and G2=0 at T=i-2 with G5=1 at T=i]
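The single-stem procedure can be sketched as follows. This is a minimal illustration using an assumed dict-based netlist model (not the paper's data structures): a value is injected at a stem in frame 0 and propagated forward across unrolled time frames with 3-valued simulation, recording every signal value that becomes determined.

```python
X = None  # the "unknown" value in 3-valued simulation

def eval_gate(kind, vals):
    if kind == "AND":
        if 0 in vals:
            return 0
        return 1 if all(v == 1 for v in vals) else X
    if kind == "OR":
        if 1 in vals:
            return 1
        return 0 if all(v == 0 for v in vals) else X
    if kind == "NOT":
        return X if vals[0] is X else 1 - vals[0]
    raise ValueError(kind)

def learn_stem(gates, ffs, inputs, stem, value, frames):
    """gates: {name: (kind, input_names)} listed in topological order.
    ffs: {ff_name: driving_gate}; an FF's output in frame t+1 equals its
    driving gate's value in frame t.  Returns [(signal, value, frame), ...]."""
    learned = []
    state = {f: X for f in ffs}              # FF outputs start unknown
    for t in range(frames):
        vals = {i: X for i in inputs}        # primary inputs unknown
        vals.update(state)
        if t == 0:
            vals[stem] = value               # the injected stem value
        for g, (kind, ins) in gates.items():
            vals[g] = eval_gate(kind, [vals[n] for n in ins])
        for sig, v in vals.items():
            if v is not X and not (t == 0 and sig == stem):
                learned.append((sig, v, t))
        state = {f: vals[src] for f, src in ffs.items()}
    return learned

# Hypothetical 2-gate circuit: G1 = AND(S, a), F1 latches G1, G2 = NOT(F1).
gates = {"G1": ("AND", ["S", "a"]), "G2": ("NOT", ["F1"])}
rels = learn_stem(gates, {"F1": "G1"}, ["S", "a"], "S", 0, frames=3)
# Injecting S=0 in frame 0 forces G1=0 in frame 0 and G2=1 in frame 1.
```

Each learned triple is a cross-frame implication on the stem value, usable later as a static implication during ATPG.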
Single-node learning: Example
[Figure: example circuit; propagating an injected stem value yields relations such as F1=1, F2=1, F3=1, F4=1 with G7=1, and F1=0 with G8=0]
Multiple-node learning
• For each gate with multiple stems implying the same value on the gate:
  • Inject the corresponding contrapositive values on the stems
  • Propagate the values concurrently forward across time frames
[Figure: example over frames T0–T3; stems S1, S2, S3 each imply G5=1, so their contrapositive values are injected together, learning G1=1]
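The contrapositive step can be sketched as follows, assuming a simple representation of previously learned single-node relations (the names below are hypothetical): if several stems each imply the same value on a gate, then the gate taking the opposite value implies the contrapositive value on every one of those stems, and all contrapositives can be injected and propagated concurrently.

```python
# Hypothetical single-node relations: (stem, value) -> [(gate, value, frame_offset)]
single = {
    ("S1", 1): [("G5", 1, 0)],
    ("S2", 1): [("G5", 1, 0)],
    ("S3", 1): [("G5", 1, 0)],
}

def contrapositive_injection(single, gate, value):
    """Stems whose opposite value must hold whenever `gate` != `value`:
    from (s=v -> gate=value) follows (gate != value -> s != v)."""
    inject = {}
    for (stem, v), rels in single.items():
        if any(g == gate and w == value for g, w, _ in rels):
            inject[stem] = 1 - v
    return inject

# G5=0 forces S1=S2=S3=0 simultaneously; propagating these three injections
# together can learn relations that no single stem reveals on its own.
```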
Multiple-node learning: Example
[Figure: example circuit; relations shown include I1=0 with F2=1 and with F3=1, and the learned results F2=0 at T=1 and F2=0 at T=2]
Multiple-node learning: Example 2
[Figure: example circuit; G9=0 implies F2=0; injecting I1=0 and I2=0 concurrently learns G9=1 at T=1]
Use of gate equivalence in learning
• Gate equivalence enables value propagation
[Figure: example circuit in which G2 is equivalent to G3; exploiting the equivalence yields G7=1 and F3=1]
Learning of tied gates
• Case 1 (based on single-node learning): G is tied to 0
• Case 2 (based on multiple-node learning): G2 and G3 are tied to 0
[Figure: example circuits for the two cases]
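Tied-gate detection from single-node learning can be sketched as follows, under the assumption that learning is run twice per stem (once per injected value): a gate that is forced to the same constant whether the stem carries 0 or 1 is tied to that constant. The relation maps below are hypothetical.

```python
def tied_gates(learn_with_0, learn_with_1):
    """Each argument maps (gate, frame_offset) -> implied value for one
    injection; a gate forced to the same value by both injections is tied."""
    return {k: v for k, v in learn_with_0.items()
            if learn_with_1.get(k) == v}

with0 = {("G", 0): 0, ("H", 1): 1}   # hypothetical relations from injecting stem=0
with1 = {("G", 0): 0, ("H", 1): 0}   # hypothetical relations from injecting stem=1
# G is tied to 0 (same value under both injections); H is not tied.
```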
Practical issues
• Multiple clock domains:
  • Classify sequential elements based on their driving clock
  • Treat latches and FFs separately
  • Perform learning for each class
• Set/Reset handling:
  • A set/reset on a sequential element can invalidate the propagated value
  • Rules for value propagation across a sequential element:
    • Propagate a 0 only if there is no set line
    • Propagate a 1 only if there is no reset line
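The set/reset rule can be expressed compactly. The sketch below assumes a simple FF model in which an asynchronous set forces a 1 and an asynchronous reset forces a 0, so a value crosses the sequential element only when no control line can override it.

```python
def propagate_through_ff(value, has_set, has_reset):
    """Return the value carried into the next frame, or None when the value
    may be invalidated by an asynchronous set/reset on the FF."""
    if value == 0 and has_set:     # an active set could override the 0
        return None
    if value == 1 and has_reset:   # an active reset could override the 1
        return None
    return value
```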
Known- vs. forbidden-value relations
• Known-value implications eliminate decisions
[Figure: example circuit in which G9=0 implies F2=0, so the decision on F2 is eliminated]
Extra requirements by implications
• Known-value implications can cause additional unnecessary requirements
[Figure: example circuit in which the implication between F4=1 and G4=1 imposes an extra requirement not needed for the objective]
Suggested use of implications
• Guide decisions toward inputs with a forbidden non-controlling value
[Figure: same example circuit; the forbidden value on F2 (from G9=0) steers the decision]
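One way to read this guidance, sketched under assumptions about the decision procedure (the function and names below are illustrative, not the paper's implementation): when justifying a controlling value on a gate output, prefer a decision input whose non-controlling value is forbidden by a learned relation, since assigning the controlling value there cannot conflict with what has been learned.

```python
def pick_decision_input(inputs, noncontrolling, forbidden):
    """Prefer an input whose non-controlling value is in the forbidden set;
    fall back to the first input when no such input exists."""
    for i in inputs:
        if (i, noncontrolling) in forbidden:
            return i
    return inputs[0]

forbidden = {("b", 1)}  # hypothetical learned forbidden-value relation
# Justifying 0 on AND(a, b, c): the non-controlling value is 1, so pick b,
# where 1 is forbidden and the controlling 0 is the only consistent choice.
```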
Experimental results
• Benchmark circuits:
  • ISCAS 89 and 93 circuits
  • Four retimed circuits (high sequential ATPG complexity)
  • Three industrial circuits
• Sequential learning performed for up to 50 time frames
• Sequential ATPG run in three modes:
  • Without sequential learning
  • With learned relations used as known-value implications
  • With learned relations used as forbidden-value implications
• Test Coverage = Detected / (Total - Untestable)
Sequential learning results

Circuit         FFs     Gates    Learned relations     CPU (s)
                                 FF-FF     Gate-FF
s5378           179     2779     250       2233        6.42
s6669           239     3080     24        1603        0.39
s13207          638     7951     1566      35093       23.08
s15850          597     9772     1516      29378       42.04
s38417          1636    22179    1554      46981       30.24
s510jcsrre      26      243      127       891         0.10
s510josrre      28      243      50        484         0.07
s832jcsrre      27      195      125       743         0.11
scfjisdre       20      764      22        1980        0.56
Industrial 1    460     8693     118       6774        2.74
Industrial 2    7068    63156    2069      36397       24.31
Industrial 3    15689   681595   8016      186930      403.30
Conclusions
• A novel and efficient method for sequential learning
• Identifies implications, invalid states, and tied gates
• Handles industrial circuits with multiple clock domains and partial or no set/reset
• Increases the number of detected and untestable faults
• Significantly reduces sequential ATPG time
• Applicable to redundancy identification, logic optimization, and logic verification