Design for reliability through engineering optimization

Bibliographic Details
Main Author: Ng, Wee Loon
Other Authors: Tan Chuan Seng
Format: Theses and Dissertations
Language: English
Published: 2015
Online Access:http://hdl.handle.net/10356/63945
Institution: Nanyang Technological University
Summary: The pursuit of Moore’s Law, in terms of improving transistor performance simply by shrinking transistor geometry (e.g. oxide thickness reduction, gate length reduction), has faced technical challenges in meeting performance requirements since the 130 nm technology node, where, for example, copper interconnects were first introduced. To sustain Moore’s Law, manufacturers have gone beyond conventional geometric scaling to “artificial” scaling, introducing new materials in order to fabricate transistors with improved performance. Besides the use of new materials to improve transistor performance from one technology generation to the next, the need to pack more transistors into a given area has pushed technology geometries down to nanoscale dimensions, resulting in very stringent requirements on the manufacturability and variation-control capability of the fabrication tools.

To keep pace and succeed in this dynamic and challenging environment, the conventional method of developing, qualifying, and controlling a technology must change, so that the limited resources of time and cost are optimized in the push for fast time-to-market without compromising reliability and quality. For older technologies, where the material properties are well understood and the manufacturing process margin is generous, technology qualification usually focuses on the intrinsic aspects of process reliability. In most cases, intrinsic process reliability is tested, qualified, and subsequently monitored on test structures from a limited number of qualification and monitoring lots. For advanced technologies with narrow process margins, however, the actual reliability performance must be evaluated using a large volume of data so that the impact of variation on reliability can be studied and understood as early as possible in the development phase.

To ensure robust reliability, especially for advanced technologies, it is critical to take a more proactive approach: understand the intrinsic reliability performance, understand the impact of process variation on reliability, and control the key process parameters that affect reliability before actual product qualification. This proactive approach helps prevent situations where products fail reliability requirements in the field because a narrow reliability margin is further eroded by process variation arising from product-specific design-to-process sensitivity. It also helps prevent process changes, introduced late in the development phase to recover process and reliability margin after reliability failures, from significantly delaying time-to-market and increasing development cost.

This work proposes a paradigm change in reliability qualification and monitoring methodology, aimed at establishing a linkage between variation in the key process parameters and product reliability performance. The objective of this approach is to enable reliability robustness to be studied systematically, effectively, and efficiently from the early technology development phase through the high-volume manufacturing phase of the product. To establish this linkage between the key process parameters and their impact on product reliability, an innovative Design-For-Reliability (DFR) through engineering optimization methodology is proposed.
In this method, the key process parameters affecting reliability are investigated through a set of reliability test structures with built-in variation (DFR test structures), in order to understand the impact of process variation on reliability. Given the constraints of limited space on the test chip and limited testing resources, the DFR test structures allow a large volume of reliability data to be collected in an effective and practical manner through fast wafer-level reliability testing methods such as isothermal electromigration (EM). Because the variation built into the DFR test structures is known, the large volume of reliability data collected from them can be used to model the impact of process variation on reliability in a systematic manner. This thesis shows that the newly proposed Design-For-Reliability (DFR) through engineering optimization methodology establishes, for the first time, a way to systematically study the linkage between process parameter variation and its impact on reliability. Using a large volume of process parameter data, a DFR model is built that allows the impact of process variation on reliability to be studied systematically and potential reliability weaknesses due to process variation to be identified in the factory before delivery to the customer. This proactive method of reliability qualification and monitoring not only represents a paradigm change in the conventional view of reliability, but, more importantly, enables a systematic approach to drive for robust, variation-tolerant reliability from the development phase to the production phase of the technology.
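The abstract does not specify the mathematical form of the DFR model. As a rough illustration only, the sketch below assumes a simple first-order response-surface fit of log electromigration lifetime against two hypothetical process parameters (metal line width and barrier thickness) on synthetic data, to show how data from test structures with known built-in variation could be turned into a model that flags marginal process corners before product qualification. All parameter names, values, and the pass/fail target are invented for illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical DFR test-structure data: each row is one structure with a
# deliberately built-in variation of metal line width (um) and barrier
# thickness (nm), plus the isothermal-EM time-to-failure measured on it.
n = 500
line_width = rng.normal(0.10, 0.01, n)   # drawn around a 0.10 um target
barrier_thk = rng.normal(3.0, 0.3, n)    # drawn around a 3.0 nm target
noise = rng.normal(0.0, 0.15, n)         # lognormal scatter in EM lifetime
log_ttf = 2.0 + 8.0 * line_width + 0.4 * barrier_thk + noise  # synthetic data

# DFR model: ordinary least-squares fit of log(TTF) against the process
# parameters, i.e. a first-order response surface linking variation to
# reliability. The column of ones provides the intercept term.
X = np.column_stack([np.ones(n), line_width, barrier_thk])
coeffs, *_ = np.linalg.lstsq(X, log_ttf, rcond=None)
residual_sigma = np.std(log_ttf - X @ coeffs)

# Project the lifetime at a worst-case process corner (-3 sigma line width,
# -3 sigma barrier thickness) and compare it with a hypothetical reliability
# target, flagging the corner before product qualification.
corner = np.array([1.0, 0.10 - 3 * 0.01, 3.0 - 3 * 0.3])
projected_log_ttf = corner @ coeffs - 3 * residual_sigma  # pessimistic bound
target_log_ttf = 2.5                                      # hypothetical spec

print("fitted coefficients:", coeffs)
print(f"projected worst-corner log(TTF): {projected_log_ttf:.2f} "
      f"({'PASS' if projected_log_ttf >= target_log_ttf else 'FAIL'} "
      f"vs target {target_log_ttf})")
```

In this toy version, the fitted coefficients play the role of the linkage between process parameters and reliability, and the corner projection plays the role of identifying a reliability weakness in the factory; the actual thesis methodology may use a different model form, different parameters, and different acceptance criteria.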