A Statistical Design Method for Giga Bit Memory Arrays and Beyond C. Caillat (*), E. Carman, J.M. Daga, C. Ouvrard and P. Bauser, Innovative Silicon SA, Lausanne, Switzerland (*) now with Micron Technology Belgium
Outline • Introduction • Principle of the Method & Theoretical Background • Application Example: Case of a 1Gb Z-RAM Floating-Body Memory Array • Conclusions and Perspectives
Introduction
• This paper deals with a statistical sampling method intended to study worst-case configurations of electrical parameters in large memory arrays
• The goal is to help assess the robustness of designs to local parameter fluctuations in a reasonable number of simulation runs
• These fluctuations can impact functionality through critical electrical responses, for example the voltage read margin resulting from the variation of sense-amplifier characteristics and cell signal fluctuations
• Hence, worst cases are defined as the combinations of parameters yielding the largest response degradation within an array of fixed size
• To identify such worst cases for the relevant responses, the classical approaches are Monte-Carlo sampling (CPU intensive) or corner sampling (risk of over-design and/or non-functional corners)
• Our approach is based on sampling iso-probable extreme events
• Application boundaries:
• Normally distributed and independent input variables
• Non-exotic response: the response must be monotonic with respect to the input parameter variations; for example, the signal should not decrease and then increase again as a parameter grows
• Local fluctuations only: die-to-die components are not considered here
Illustration of the Worst-Case Margin
[Figure: two probability density functions (PDFs) on a common V/I axis, the cell signal distribution and the sense-amp margin distribution, with points x1 and x2 marking the margin at a given probability (number of events per array).]
Questions: for a given array size, what is the lowest observable margin (= x2 − x1), and for which (x1, x2) pair(s) does it occur?
Outline • Introduction • Principle of the Method & Theoretical Background • Application Example: Case of a 1Gb Z-RAM Floating-Body Memory Array • Conclusions and Perspectives
Sampling Principle
Assuming two independent random parameters (x1, x2), there are several ways to sample the parameter space:
• A Monte-Carlo simulation approach fails to reach the extreme cases on very large arrays, and trying to extrapolate from the region it does cover would be inaccurate
• A classical DOE approach may deviate substantially from the actual worst cases and show false non-functionality or non-linearity
• This proposal: sampling exclusively on the extreme horizon of iso-probability (the event horizon in an array) = very few runs! The contour of iso-probable events can be calculated theoretically
[Figure: the (x1, x2) parameter plane, comparing Monte-Carlo and DOE sampling regions with the iso-probability event horizon.]
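The deck does not show how points on the iso-probability horizon are generated. Since the bibliography cites Marsaglia's method for choosing points uniformly on the surface of a sphere, a plausible minimal sketch (in Python rather than the authors' Excel/VBA tool; all names and placeholder values here are ours) is:

```python
import numpy as np

def sample_iso_probability(n_vars, z, n_samples, rng=None):
    """Draw points uniformly on the n-dimensional sphere of radius z
    (the iso-probability contour, in standardized sigma units),
    using Marsaglia's method: normalize standard normal vectors."""
    rng = np.random.default_rng(rng)
    g = rng.standard_normal((n_samples, n_vars))      # isotropic Gaussian cloud
    u = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform unit directions
    return z * u                                      # scale to radius z

# Example: 156 samples on the z = 6 contour of a two-parameter space,
# mapped back to physical units x_i = mu_i + sigma_i * u_i.
samples = sample_iso_probability(n_vars=2, z=6.0, n_samples=156, rng=0)
mu, sigma = np.array([0.0, 0.0]), np.array([1.0, 1.0])  # placeholder values
physical = mu + sigma * samples
```

Normalizing an isotropic Gaussian vector yields a direction that is uniform on the unit sphere, so scaling by z places every sample on the same iso-probability contour.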
Joint Probability Density Function
Key theoretical element: the joint Probability Density Function (PDF) of two independent normally distributed variables (x, y) with means (µx, µy) and standard deviations (σx, σy), respectively (*):

$$f(x,y) = \frac{1}{2\pi\,\sigma_x\sigma_y}\,\exp\!\left[-\frac{1}{2}\left(\frac{(x-\mu_x)^2}{\sigma_x^2}+\frac{(y-\mu_y)^2}{\sigma_y^2}\right)\right] \qquad (1)$$

Iso-density curve definition for (x, y) pairs, with z = constant (the curve is an ellipse, shown in red in the original slide):

$$\frac{(x-\mu_x)^2}{\sigma_x^2}+\frac{(y-\mu_y)^2}{\sigma_y^2} = z^2 \qquad (2)$$

(*) AKA the bivariate normal distribution (with zero correlation here)
Note: illustration examples and formulas are given for the two-parameter case and generalized to n dimensions hereafter
Iso-Probability Sets
• It can be demonstrated, through basic statistics (see details in the appendix), that the above iso-density curve is also an iso-probability curve
• The corresponding probability level is given by z, expressed as a standardized sigma value of the normal distribution
• In other words, choosing (x, y) pairs such that

$$\frac{(x-\mu_x)^2}{\sigma_x^2}+\frac{(y-\mu_y)^2}{\sigma_y^2} = 6^2$$

guarantees that these events all belong to an iso-probability contour of ±6 sigma, i.e. each has about one chance out of 1 billion to happen: they represent the set of worst cases that we are looking for (z = 6 in this example)
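As a quick numerical sanity check (our addition, using SciPy), the one-sided normal tail at z = 6 is indeed about one event per billion, i.e. roughly one event in a 1 Gb array:

```python
from scipy.stats import norm

z = 6.0
p = norm.sf(z)                       # one-sided tail probability beyond z sigma
print(f"P(Z > {z}) = {p:.3e}")       # ~9.87e-10
print(f"events per 1 Gb array: {p * 2**30:.2f}")  # ~1 event
```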
Sense-Amplifier Case and General Formulas
In practice, however, not all parameters belong to the same population. The sense-amplifier (SA) is a sub-population of the entire array: each SA is shared by many cells. The excursion of the associated variables is therefore bounded by a frontier defined as follows, where $X_{SA} \sim N(\mu_{SA}, \sigma_{SA})$ and $Z_{SA}$ represents the lowest probability of occurrence for an SA in the array, expressed in standardized sigmas:

$$-Z_{SA} \le \frac{X_{SA}-\mu_{SA}}{\sigma_{SA}} \le +Z_{SA} \qquad (3)$$

Finally, the above formulas (2) and (3) can be generalized to an n-dimensional parameter space:

$$\sum_{i=1}^{n}\frac{(x_i-\mu_i)^2}{\sigma_i^2} = z^2 \qquad (4)$$

(iso-probability n-dimensional ellipsoid for a set of random variables Xi)

$$\left|\frac{x_j-\mu_j}{\sigma_j}\right| \le Z_{SA} \ \text{for each SA-related variable } x_j \qquad (5)$$

(generalized boundary condition for the subset of SA-related variables)
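A hedged sketch of how a sampling plan honoring both (4) and (5) could be generated (our construction, not the authors' tool): draw points uniformly on the z-radius sphere and reject those whose SA-related coordinates exceed the ±Z_SA bound of (5).

```python
import numpy as np

def bounded_iso_samples(n_vars, z, z_sa, sa_idx, n_samples, rng=None):
    """Sample the z-radius iso-probability sphere, rejecting points whose
    SA-related coordinates (columns listed in sa_idx) violate eq. (5)."""
    rng = np.random.default_rng(rng)
    kept = []
    while sum(len(k) for k in kept) < n_samples:
        g = rng.standard_normal((4 * n_samples, n_vars))
        pts = z * g / np.linalg.norm(g, axis=1, keepdims=True)  # eq. (4)
        ok = np.all(np.abs(pts[:, sa_idx]) <= z_sa, axis=1)     # eq. (5)
        kept.append(pts[ok])
    return np.concatenate(kept)[:n_samples]

# Example matching the deck's three-variable case: two SA variables bounded
# by Z_SA, one cell variable, 2000 samples on the 1-event-per-Gb contour.
# z_sa = 4.6 is only an illustrative value (~1 event per 512k SAs).
plan = bounded_iso_samples(n_vars=3, z=6.0, z_sa=4.6, sa_idx=[0, 1],
                           n_samples=2000, rng=0)
```

Rejection keeps the accepted points uniformly distributed over the allowed portion of the iso-probability surface.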
Examples of Resulting Sampling Surfaces
[Figure (two variables): iso-probability curve example for a 1 Gb array with 512k SAs, 156 samples; the SA variable XSA shows a bounded excursion between −ZSA and +ZSA. Scales represent standardized sigma values.]
[Figure (three variables): iso-probability surface example for a 1 Gb array with 512k SAs, two SA variables (XSA, YSA) and one cell variable, 2000 samples, scales in sigma. (a) 3-D view (b) projection matrix.]
Outline • Introduction • Principle of the Method & Theoretical Background • Application Example: Case of a 1Gb Z-RAM Floating-Body Memory Array • Conclusions and Perspectives
Foreword: Response Analysis
• Once a sampling plan is established, the next steps are:
• To simulate a subset of samples on the selected iso-probability surface, i.e. the one corresponding to the requested level of occurrence (e.g. 1 event per 1 Gb array)
• To extract the relevant electrical responses from the simulation tool (SPICE, for example)
• Then to search for the minimum (or maximum) of these responses and to check whether these conditions guarantee circuit functionality
• We applied the method of this paper to assess the robustness of a 1 Gb Z-RAM floating-body memory design to cell and sense-amplifier fluctuations during a read '1' operation
• We focus here on the case of a simple response, a signal margin, that does not require actually simulating the experimental points, but only calculating them with a known formula (see hereafter)
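As a sketch of this flow (our illustration; run_spice stands for a hypothetical wrapper around the circuit simulator and is not part of the paper):

```python
import numpy as np

def worst_case_response(plan, run_spice):
    """Evaluate each sample of the iso-probability plan and return the
    worst (minimum) response with the parameter set that produced it."""
    responses = np.array([run_spice(sample) for sample in plan])
    i = int(np.argmin(responses))
    return responses[i], plan[i]

# Functionality check: the design is deemed robust at the chosen occurrence
# level if even the worst sampled response stays above the functional limit.
# worst, at_params = worst_case_response(plan, run_spice)
# assert worst > functional_limit
```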
Z-RAM Floating-Body Memory Cell
The memory cell studied is a Z-RAM vertical double-gate floating-body memory device. It has been characterized and optimized in TCAD. The metric monitored is the charge transferred during a 3 ns read '1', as illustrated below. This is referred to as the cell signal hereafter and expressed in arbitrary units (a.u.).
[Figure: transient simulations during a read '1' (a) or a read '0' (b) operation. Scale: electron current density (A·cm⁻²). T = 366 K, 40 nm pillar diameter assumed.]
[Figure: typical read '1' and '0' waveforms used for transient cell characterization in TCAD.]
Main Cell Parameter Variations
After a sensitivity study, two physical parameters were found to significantly impact the cell signal: the Random Dopant Fluctuation (RDF) and the pillar diameter. The relative effects of these parameters, with respect to the nominal conditions, have been characterized in TCAD over a ±6σ variation range using simplifying assumptions (continuous doping, simple diameter offset). The linearity of the effects allows a linear interpolation over the full range.
Note: the Poisson distribution was used to calculate the extreme doping values. The diameter range assumes 3σ = 1 nm of local fluctuation.
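The slide does not detail the Poisson calculation. One plausible reading (entirely our assumption: the dopant count in the body follows a Poisson law of mean N, and the extremes are taken at the quantiles matching a 6-sigma normal tail) could look like:

```python
from scipy.stats import norm, poisson

N_mean = 500           # hypothetical mean number of dopants in the body
p_tail = norm.sf(6.0)  # tail probability equivalent to a 6-sigma event

low  = poisson.ppf(p_tail, N_mean)   # extreme low dopant count
high = poisson.isf(p_tail, N_mean)   # extreme high dopant count
print(low, high)  # asymmetric around N_mean: the Poisson law is discrete and skewed
```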
Sense-Amplifier Variations
The variations of the sense-amplifier switching point due to transistor mismatch have been characterized in the actual circuit configuration using HSPICE coupled to a classical Monte-Carlo generator, covering a ±3σ excursion range (T = 366 K, 1000 Monte-Carlo runs, read '1').
The SA is composed of a pre-amplifier connected to the memory cells through the bit lines (tbl), plus a latch connected to the output of the pre-amplifier (paout).
Technique used: the cell signal was swept until residual fails on the output (saout) were eliminated, in order to find the critical SA signal (switching point) due to degraded pre-amplifier output and latch offset.
[Figure: Monte-Carlo waveforms (a.u.) of tbl, paout and saout over 0-4 ns.]
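A hedged sketch of that sweep (our illustration; fails_after_mc is a hypothetical predicate reporting whether any Monte-Carlo SA instance still fails on saout at a given cell signal):

```python
def switching_point(fails_after_mc, lo=0.0, hi=10.0, tol=1e-3):
    """Bisect the cell signal (a.u.) to find the smallest value at which
    no residual fail is observed across the Monte-Carlo runs."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fails_after_mc(mid):   # still failing: need more signal
            lo = mid
        else:                     # all runs pass: try less signal
            hi = mid
    return hi
```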
Sampling Surface
With the 3 parameters listed above, we generated a sampling plan using a tool we developed under Excel/VBA. The 500 samples generated are represented in the projection matrix below and belong to an iso-probability set of 1 event per 1 Gb. Scales are relative cell signal (a.u.).
[Figure: projection matrix of the 500-sample plan over the three parameters.]
Resulting Cell Signal Excursion
The studied response is a signal margin, defined as the difference between the cell signal during a read '1' and the SA switching point:

$$M = M_{avg} + S_{RDF} + S_{diam} - SA_{offset}$$

where M_avg is the average margin (constant term); S_RDF and S_diam are the cell signal fluctuations due to Random Dopant Fluctuation (RDF) and device diameter variations (diam), respectively; and SA_offset is the SA switching-point (offset) fluctuation. A successful read '1' operation requires a positive signal margin.
Within the above sampling plan, the calculated signal margin is always greater than the functional limit, with a minimum value of 1.65 (a.u.); we therefore expect a fully functional 1 Gb array (see figure).
[Figure: 3-D scatter plot of the signal margin vs. SA offset and RDF effect, with the functional limit plane. Minimum signal of 1.65 found for CellRDF @ −5.6σ, CellDiameter @ −1.8σ and SAoffset @ +1.3σ.]
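Putting the pieces together, a minimal sketch of the margin evaluation over the sampling plan (our construction; the sensitivity coefficients and M_avg below are placeholders, since the paper's values are in undisclosed arbitrary units):

```python
import numpy as np

# Placeholder sensitivities (a.u. per sigma) and average margin (a.u.);
# the real values come from the TCAD / HSPICE characterizations above.
k_rdf, k_diam, k_sa, M_avg = 1.0, 0.5, 1.0, 10.0

def signal_margin(plan):
    """plan columns: [RDF, diameter, SA offset], in standardized sigmas.
    Margin M = M_avg + S_RDF + S_diam - SA_offset (all in a.u.)."""
    s_rdf  = k_rdf  * plan[:, 0]
    s_diam = k_diam * plan[:, 1]
    sa_off = k_sa   * plan[:, 2]
    return M_avg + s_rdf + s_diam - sa_off

# margins = signal_margin(plan)
# print(margins.min(), plan[np.argmin(margins)])  # worst case and its location
```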
Outline • Introduction • Principle of the Method & Theoretical Background • Application Example: Case of a 1Gb Z-RAM Floating-Body Memory Array • Conclusions and Perspectives
Conclusions & Perspectives • We propose a fast and accurate method to predict worst case configurations of parameters within large memory arrays • We have applied this method to assess the functionality of a 1Gb Z-RAM floating-body memory array with only 500 runs • The method applies, in principle, to any other memory array facing similar local fluctuation challenges (Flash, DRAM, SRAM…) • There are several ways to improve this method: • Confidence intervals can be calculated for the response • Extremum search refinement (better accuracy for the prediction) • By numerical methods: search for local extremum • Analytical solutions in particular cases – linear combinations of parameters for example • Use of “Importance Sampling” methods to have the full distributions and to take into account non-normal input parameters and/or non-monotonic responses
Elements of Bibliography
• S. Akiyama et al., "Concordant memory design using statistical integration for the billions-transistor era", ISSCC Dig., p. 466, 2005
• J. Yeung and H. Mahmoodi, "Robust sense amplifier design under random dopant fluctuations in nano-scale CMOS technologies", IEEE International SOC Conference, p. 261, 2006
• D.P. Bertsekas and J.N. Tsitsiklis, Introduction to Probability, Athena Scientific, 2nd Edition, July 2008
• G. Marsaglia, "Choosing a point from the surface of a sphere", The Annals of Mathematical Statistics, vol. 43(2), p. 645, 1972
• J.S. Kim et al., "Vertical double gate Z-RAM technology with remarkable low voltage operation for DRAM application", Symp. VLSI Technology Dig., p. 163-164, 2010
• P. Magnone et al., "Matching performance of FinFET devices with Fin widths down to 10 nm", IEEE EDL, vol. 30(12), p. 1374, Dec. 2009
• D. Reid et al., "Analysis of threshold voltage distribution due to random dopants: a 100000-sample 3-D simulation study", IEEE TED, vol. 56(10), p. 2255, Oct. 2009
• S. Toriyama et al., "Device simulation of random dopant effects in ultra-small MOSFETs based on advanced physical models", SISPAD Dig., p. 111, 2006
Cumulative Probability Calculation [1/4]
Demonstrations below are made using standardized variables X, Y. Hence the joint PDF reads:

$$f(x,y) = \frac{1}{2\pi}\,e^{-(x^2+y^2)/2}$$

and the iso-density curve reads:

$$x^2 + y^2 = z^2 \quad \text{(a circle of radius } z \text{ centered on the origin } O\text{)}$$

[Figure: the (x, y) plane showing the iso-density circle of radius Z through the point M(X, Y), and the point M' reached by moving M radially outward by Δz along the direction <OM>.]

Reasoning to find the cumulative probability corresponding to a given pair (X, Y):
• The point M(X, Y) belongs to the iso-density circle of radius Z. We want to calculate P_Z, the cumulative probability corresponding to this event (M, Z), i.e. of all events beyond this circle
• Z being known, X and Y are not independent, and calculating the cumulative probability requires taking that into account: the PDF to be integrated is actually a conditional distribution function that can be expressed as a function of a single variable, to be found
• The cumulative probability is then the integral of this function from the position M to +∞ along this single variable
• On the other hand, the integral is obtained by summing the PDF f(x, y) over the successive intervals Δz between iso-density circles centered on the origin O
• Moving from the circle of radius Z to the circle of radius Z + Δz transforms M into M' along the vectorial direction <OM>; this gives the direction along which to integrate the PDF
Cumulative Probability Calculation [2/4]
• With the above considerations, the cumulative probability P_Z can be defined as the integral of the conditional distribution along x (f_X|Y), with y depending on x through the linear equation y = (Y/X)·x:

$$P_Z = \int_X^{+\infty} f_{X \mid Y=(Y/X)\,x}(x)\,dx$$

• Observing that this is equivalent to a sum along a radius in the direction <OM>, it is more natural to express this integral as a function of the radius z:

$$P_Z = \int_Z^{+\infty} p(z)\,dz$$

p being the conditional distribution along the direction <OM>
Cumulative Probability Calculation [3/4]
• Based on that, it is more convenient to express the PDF in polar (cylindrical) coordinates, with x = z·cos θ and y = z·sin θ:

$$f(z\cos\theta,\, z\sin\theta) = \frac{1}{2\pi}\,e^{-z^2/2}$$

• In this coordinate system, the condition on the direction <OM> is expressed as a constant angle θ_M. Hence:

$$p(z) = p(z \mid \theta = \theta_M)$$

• The expression of the conditional distribution p is found by applying Bayes' theorem on PDFs:

$$p(z \mid \theta=\theta_M) = \frac{f(z\cos\theta_M,\, z\sin\theta_M)}{p_M}$$

where p_M is the marginal probability defined as:

$$p_M = \int_{-\infty}^{+\infty} f(z\cos\theta_M,\, z\sin\theta_M)\,dz$$
Cumulative Probability Calculation [4/4]
• The marginal probability p_M can be calculated:

$$p_M = \int_{-\infty}^{+\infty} \frac{1}{2\pi}\,e^{-z^2/2}\,dz = \frac{\sqrt{2\pi}}{2\pi} = \frac{1}{\sqrt{2\pi}}$$

• Therefore, the conditional distribution function reads:

$$p(z \mid \theta=\theta_M) = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}$$

• Finally, the cumulative probability is:

$$P_Z = \int_Z^{+\infty} \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}\,dz = 1 - \Phi(Z)$$

which is the (complementary) cumulative probability of the unidimensional standard normal distribution
• A similar reasoning can be made in n-space using hyperspherical coordinates and integrating along a radius of the hypersphere; the result is the same
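As an added numerical cross-check (ours, using SciPy): integrating this conditional density along the ray beyond Z reproduces the one-sided standard normal tail, as the derivation claims.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

Z = 6.0
p_along_ray = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
integral, _ = quad(p_along_ray, Z, np.inf)
print(integral, norm.sf(Z))   # both ~9.87e-10, i.e. ~1 event per 1e9 (1 Gb)
```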