
Description of work for WP2: Modelling and analysis of faulty components

   For a proper evaluation of the impact that the aforementioned reliability problems have on circuit and system design, it is not sufficient to have models representing the mechanism and effect of a particular reliability phenomenon in a single device or interconnect. Nor is it sufficient to consider only the possible interactions between reliability phenomena, e.g. the combined impact of NBTI and soft breakdown (SBD) effects on the behavior of an SRAM cell. The real problems need to be evaluated in the context of the particular circuit where the degrading device or interconnect is situated. The fact that a progressive degradation effect manifests mildly in each single transistor or wire, considered separately, says nothing about its impact on circuit-level performance metrics. For instance, oxide breaks manifest themselves as a slight increase in the total gate leakage that may not strongly affect the transistor current-voltage characteristics, since the drain current does not change significantly at the moment the soft oxide breakdown occurs.
   However, when looking at the interaction that the gate current increase may have with the circuit operation, even a small increase can affect the parametric figures of the circuit by altering the current of another device whose drain is connected to that gate. Typical examples where small changes in the gate current of a single transistor can cause major problems at the circuit level are SRAM sense amplifiers and other circuits that rely on common-mode rejection: even a slight shift in the bias conditions of one of the transistors may have detrimental effects on the functionality of the circuit. Other types of circuit are much more robust to breakdowns; for example, ring oscillators can tolerate hard breakdowns on several of their transistors before they stop oscillating at the specified frequency. This means that, in order to evaluate the impact of reliability degradation mechanisms on circuit-level performance metrics, we need analysis and modeling tools arranged in a flow that takes into account the context in which the affected transistor or wire operates, and thus percolates the effect all the way up from the device to the circuit, and from the circuit to the system level.

   In WP2, the focus will be the deployment and further development of the Variability Aware Modelling framework as a variability modelling toolflow for Bulk CMOS, FinFET, III-V/Ge and Carbon Nanotube devices (22nm down to the 8nm technology node). The toolflow will be extended with reliability modelling for the Bulk CMOS devices, and feasibility studies on variability and reliability will be conducted for the III-V/Ge and Carbon Nanotube devices.

   This work will be based on the pre-existing “Variability Aware Modelling” (VAM) framework, IMEC/TAD’s “proof of concept” R&D toolset for variability system modelling. The current version targets designs above 45nm and does not support reliability modelling. The new version of VAM developed in WP2 will propagate both variability and reliability information throughout the simulation environment for sub-22nm technology nodes. Information will be percolated using the “VAM information format” (VAMIF), IMEC’s framework for representing variability and reliability information.

   More specifically, the effort will focus on Memory VAM, the part of VAM that helps answer the so-called “grand memory questions”. Here one needs to scan large trade-off spaces that cannot be handled with ad-hoc methods such as spreadsheets, back-of-the-envelope calculations, extrapolations from small test circuitry, or ITRS statements. Such simultaneous trade-offs involve architecture (parallelism, hierarchy), circuit topologies such as redundancy, technology (node, SRAM/DRAM/NVM), manufacturing options, temperature and Vdd range, yield and field returns, power and access time, and more.
Figure: Existing Variability Aware Modelling (VAM) framework overview. Memory VAM in TRAMS will include both variability and reliability statistical modelling.
   Memory VAM’s place in the above VAM flow is between the “compact model” abstraction level and the “cell” abstraction level. Yet Memory VAM can also be considered a standalone tool that starts from some form of statistical SPICE and produces statistical trade-off models of the full memory. Memory VAM delivers statistical models of SRAM (eventually DRAM, NVM) macro blocks. This is a two-step process.

• The first step is the statistical analysis, in the analog domain, of SPICE netlists representative of the critical path (“donut”) of the SRAM. It delivers statistical timing/energy models of the critical path under process variation and degradation.
• The second step translates the donut results to the full SRAM, with awareness of the memory architecture.
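
As a minimal sketch of the first step, the snippet below Monte Carlo samples a critical-path (“donut”) delay under random threshold-voltage variation plus an assumed aging shift. The alpha-power-law delay function is a toy stand-in for the actual SPICE characterization, and all parameter values (sigma, shifts, constants) are purely illustrative assumptions, not TRAMS data:

```python
import numpy as np

rng = np.random.default_rng(0)

def donut_delay(vth, vdd=0.9, alpha=1.3, k=1e-10):
    """Toy alpha-power-law delay model standing in for a SPICE run
    on the critical-path ('donut') netlist. All constants are
    illustrative assumptions."""
    return k * vdd / (vdd - vth) ** alpha

# Monte Carlo over process variation (random Vth shifts) combined with
# a deterministic degradation shift (e.g. an assumed NBTI-induced dVth).
n_samples = 10_000
dvth_var = rng.normal(0.0, 0.03, n_samples)  # process variation, sigma = 30 mV
dvth_age = 0.02                              # assumed aging shift, 20 mV
delays = donut_delay(0.3 + dvth_var + dvth_age)

# Statistical timing model of the donut: (mu, sigma) would feed the
# second, memory-level translation step.
mu, sigma = delays.mean(), delays.std()
print(f"mean delay = {mu:.3e} s, sigma = {sigma:.3e} s")
```

The same sampling loop would, in the real flow, wrap HSPICE or SPECTRE runs on the donut netlist rather than a closed-form delay formula.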

   This statistical representation also implies yield information in the form of the probability of failing memory cells. The full memory model is a function of the donut via memory architecture descriptions (such as the number of bits per word, the number of words per bank and the total number of banks) and of the redundancy options of the memory. The tool builds on top of industry-standard design flows and tools for analog-domain characterization, such as HSPICE or SPECTRE, and runs under Matlab/Java. It uses IMEC’s VAMIF representation of variability, including its correlation with geometry and the correlations between parameters.
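
How a per-cell failure probability and the architecture/redundancy parameters combine into a full-memory yield can be illustrated with a simple binomial model. This is a hypothetical sketch, not the actual Memory VAM computation: it assumes column redundancy, where a bank is repairable if at most spare_cols of its columns contain a failing cell, and treats cell failures as independent:

```python
from math import comb

def yield_with_redundancy(p_cell, bits_per_word, words_per_bank,
                          n_banks, spare_cols):
    """Full-memory yield from a per-cell failure probability, under a
    simplifying column-redundancy assumption: a bank survives if at
    most `spare_cols` of its columns contain a failing cell."""
    # Probability that one column (words_per_bank cells) has no failing cell.
    p_col_ok = (1.0 - p_cell) ** words_per_bank
    p_col_bad = 1.0 - p_col_ok
    n_cols = bits_per_word
    # Bank survives if at most spare_cols columns are bad (binomial CDF).
    p_bank = sum(comb(n_cols, k) * p_col_bad**k * p_col_ok**(n_cols - k)
                 for k in range(spare_cols + 1))
    # All banks must be functional or repairable.
    return p_bank ** n_banks

# Illustrative numbers: 64-bit words, 512 words/bank, 16 banks, 2 spare
# columns, and a per-cell failure probability taken from the donut-level
# statistical model.
y = yield_with_redundancy(1e-5, 64, 512, 16, 2)
print(f"memory yield = {y:.4f}")
```

Sweeping the architecture and redundancy arguments of such a function is one way to picture the trade-off scans described above, with the per-cell failure probability supplied by the donut-level statistical analysis.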