DFM: It's the pattern
© Lance A. Glasser 2007-2010. All Rights Reserved.
Prior to 1978, when Carver Mead and Lynn Conway came out with their paradigm-shifting book on VLSI systems, all integrated circuit designers knew about circuit design, device physics, and semiconductor processing. Integrated circuits were designed by electrical engineers. Afterwards, the design of integrated circuits became more and more the realm of computer scientists. Carver knew that this shift had to occur. He was, after all, the second author on the first paper on the fundamental limit of the MOS transistor (and even he was too conservative, by more than an order of magnitude). What Carver knew was that we would need more and more abstraction barriers between the designers of integrated circuits and the underlying physics. The physics was already too complicated in 1978 for most designers, and it was getting steadily worse. Now that we are building transistors with gate lengths a factor of 10 smaller than Carver predicted would ultimately be possible (he put the “fundamental” limit near 300 nm), we are doing things to the transistor that no one imagined at the time. The trend toward smaller devices was clear, but more compelling was the trend in transistor count, which forced designers to think at higher and higher levels of abstraction: logic, register transfer, micro-architectural elements, IP blocks, and so forth. There was no way designers could also worry about transistor physics. This worked for a long time.

Before Mead and Conway, Dennard of IBM came out with a classic paper on transistor scaling and, while it wasn’t followed religiously, it did set the general scaling roadmap of smaller devices, faster switching times, lower switching energy, and smaller supply voltages. But with each scaling factor came new challenges. Physics that had been present in earlier years, but negligible, became important. NMOS power dissipation caused that technology to be replaced by CMOS. Clock skew from global wiring caused the adoption of on-chip phase-locked loop technology.
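Dennard’s constant-field scaling roadmap can be sketched numerically. The following is a minimal illustration (the starting parameter values are made up for the example, not taken from any real process):

```python
# Sketch of classical (Dennard) constant-field scaling: shrink all
# dimensions and voltages by 1/k so the electric field stays constant;
# delay, switching energy, and power density then scale predictably.
def dennard_scale(params, k):
    """Scale a device-parameter dict by factor k (k > 1 shrinks)."""
    return {
        "gate_length_nm": params["gate_length_nm"] / k,        # dimensions: 1/k
        "supply_v": params["supply_v"] / k,                    # voltage: 1/k
        "delay_ps": params["delay_ps"] / k,                    # gate delay: 1/k
        "switch_energy_fj": params["switch_energy_fj"] / k**3, # CV^2: 1/k^3
        "power_density": params["power_density"],              # unchanged
    }

# Hypothetical starting point, loosely evoking a 6 um-era device.
gen0 = {"gate_length_nm": 6000.0, "supply_v": 5.0, "delay_ps": 1000.0,
        "switch_energy_fj": 5000.0, "power_density": 1.0}
gen1 = dennard_scale(gen0, 2.0)  # one full 2x shrink
```

The point of the constant power density line is exactly the one made above: when supply voltage stops scaling with dimensions, as happened with NMOS, power density rises and the roadmap breaks.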
Wire resistance and capacitance, as well as electromigration, helped drive the need for multiple levels of metallization, a change from aluminum wire to copper, and a move from polysilicon to silicided local interconnect. Transistors became more and more complex, with special drain profiles, ultrashallow junctions, silicon-on-insulator, shallow trench isolation, and now even high-k dielectrics to deal with tunneling and strained silicon to improve mobility. More is on the way, as we seem to have an unwritten contest to put as many non-radioactive elements of the periodic table into the integrated circuit as possible. None of this, however, has broken the abstraction barriers (though some bending did, of course, take place).

What are those abstraction barriers? Basically, design and manufacturing became different worlds with “simple” agreed-upon rules for commerce across the interface. There were two sets of rules: the so-called “design rules” that dealt with the physical or geometric, and “SPICE models” that dealt with the electrical. Design rules were simple statements about the legal geometric constructions that, if followed, would “guarantee” manufacturability. That is, it was close to WYSIWYG (what you see is what you get). They slowly evolved to incorporate primitive DFM situations. For instance, before there were barrier metals for contacts, people found that silicon dissolved in aluminum, resulting in contact punch-through to the substrate when a single contact sat on a large aluminum bus; this led to rules requiring multiple contacts. Other rules dealt with issues such as stress relief on corners through chamfering. These were the precursors to rules eventually incorporated in resolution enhancement techniques, and in many cases they actually complicated things in strange ways. The “SPICE decks” (decks being an anachronism left over from the age of IBM punch cards) were models of how the transistors behaved at the process corners.
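To give a flavor of how simple those early geometric design rules were, here is a toy checker for two classic rules, minimum width and minimum spacing, on axis-aligned rectangles. The rule values and data layout are invented for the illustration; real rule decks are far richer:

```python
# Toy 1970s-style design-rule check: features on a layer must meet a
# minimum width and a minimum same-layer spacing.
# Rectangle: (x0, y0, x1, y1) in layout units. Rule values are made up.
MIN_WIDTH = 2.0
MIN_SPACE = 3.0

def width_violations(rects):
    """Flag rectangles narrower than MIN_WIDTH in either dimension."""
    return [r for r in rects
            if min(r[2] - r[0], r[3] - r[1]) < MIN_WIDTH]

def spacing(a, b):
    """Edge-to-edge separation of two rectangles (0 if they touch/overlap)."""
    dx = max(a[0] - b[2], b[0] - a[2], 0.0)
    dy = max(a[1] - b[3], b[1] - a[3], 0.0)
    return max(dx, dy) if (dx == 0.0 or dy == 0.0) else (dx**2 + dy**2) ** 0.5

def spacing_violations(rects):
    """Flag index pairs closer than MIN_SPACE."""
    return [(i, j)
            for i in range(len(rects))
            for j in range(i + 1, len(rects))
            if spacing(rects[i], rects[j]) < MIN_SPACE]
```

A checker like this is purely local and pattern-independent, which is exactly the property of the old contract that, as argued below, no longer holds.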
“Process corners” are a shorthand for talking about the variation in device performance that can occur with statistical likelihood over the manufacturing process. There were slow n-channel transistors and fast ones; slow p-channel transistors and fast ones. Implicit in this was the assumption that all of the n-channel transistors on a chip behaved the same way, and the same held for the p-channel devices. The integrated circuit needed to work over the whole slow-to-fast range, neither overheating in the fast-fast case nor missing timing specifications in the slow-slow case. Signal integrity issues often occurred at the slow-fast or fast-slow corners. It is interesting to note that in the geometrical (or physical) case we used rules, while in the electrical case, which was already too complex for rules even in the 1970s, we used models. This will come up later in our discussion.

In any case, design rules and SPICE decks were “Design for Manufacturing,” DFM, in the 1970s and ’80s. They were the contract between design and manufacturing: “If you follow these design rules to create any legal pattern and it simulates correctly with these models, you will get what you asked for.” And all of the exotic transistor structures we have invented, all of the new materials we have added, and all of the additional layers of wire we have added did not break those abstraction barriers. Clean contracts could still be written.

So what happened? Why are we talking today about DFM as if it is new? The problem is with the geometric rules and the patterns that form the integrated circuit. At a basic level, the wafer is where the physics happens. That is where the transistors and interconnect networks are. We will call this the “wafer plane.” The rules are written for the designer in the “design plane,” or GDSII level. When I started in this field, with 6 μm technology, the design rules could be illustrated on two pages. Today there are thousands and thousands of rules.
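The process-corner methodology described above can be sketched in a few lines. The delay model and speed multipliers here are hypothetical placeholders, not real SPICE behavior; the point is the enumeration of corners and the check at each one:

```python
# Sketch of corner analysis: evaluate a toy delay model at the four
# classic slow/fast NMOS x PMOS corners plus typical-typical (TT).
from itertools import product

SPEED = {"S": 0.8, "T": 1.0, "F": 1.2}  # hypothetical drive multipliers

def stage_delay_ps(nmos, pmos, nominal_ps=100.0):
    """Toy delay model: average the pull-down and pull-up delays."""
    return 0.5 * (nominal_ps / SPEED[nmos] + nominal_ps / SPEED[pmos])

corners = {f"{n}{p}": stage_delay_ps(n, p) for n, p in product("SF", repeat=2)}
corners["TT"] = stage_delay_ps("T", "T")

worst = max(corners, key=corners.get)  # slow-slow limits timing
best = min(corners, key=corners.get)   # fast-fast limits power and hold time
```

Note that the whole scheme assumes one global (n, p) pair per die; in-die variation, discussed below, is precisely what this model cannot express.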
Worse than this, design rules are now being augmented by thousands of design “suggestions.” What we can know with confidence is that rule sets this large contain bugs, and that the computer programs that implement these rules will introduce more bugs. In the parlance of inspection, the process of creating a mistake-free layout pattern has naturally occurring defects. Even if you follow all of the design rules, some patterns may not yield. In some cases the rule set may even be over-constrained, with contradictory rules. Said another way, for the last quarter century the basic contract between design and manufacturing has been that, if you follow the rules, the manufacturing of integrated circuits is pattern independent. Today it is not. Key sources of pattern-dependent manufacturing include:
In addition, there are spatial variations across a chip, which means that not all transistors of a given flavor are the same. That is, design corner analysis is often plagued by too simplistic a model. These in-die variations drive the need for in-die metrology, and in-die metrology drives a need for design-driven metrology.

Part of the problem is a reflection of the anachronistic approach of depending largely on rules for layout in the design plane. It is possible to write fairly simple geometric design rules for today’s integrated circuits, but you would need to write them in the wafer plane, not the design plane. The transformation from the design plane to the wafer plane has gotten too complex. It needs to be done with models. Historically, when a rule-based methodology becomes insufficiently expressive, one must graduate to model-based approaches. For example, model-based OPC is replacing rule-based OPC, though rule-based starting points are often used. I believe that a winning EDA product will be geometric rule checking in the simulated wafer plane. Ideally that simulation would encompass process variations. (Note that calibrating variation tends to take mountains more data than calibrating a mean. Statisticians know that it is fairly straightforward to find a mean value with high confidence. Characterizing a standard deviation, on the other hand, takes a great many measurements to reach a reasonable confidence. This has implications for the metrology market and its connection to DFM.) Optical rule checkers (ORC) are a one-level version of wafer-plane rule checking. But I digress. The key point is that modeling and simulation will be of increasing importance in the bridge between the geometric design and manufacturing worlds, just as they have been for decades in the circuit domain.
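The parenthetical point about means versus standard deviations can be made concrete with a quick Monte Carlo experiment (the measurement numbers are invented for the illustration). For normal data, the relative standard error of the sample standard deviation is roughly 1/sqrt(2n), so even 5% precision on sigma already needs on the order of 200 measurements:

```python
# Monte Carlo illustration of why calibrating variation is far more
# data-hungry than calibrating a mean. All numbers are hypothetical.
import random
import statistics

random.seed(0)
TRUE_MU, TRUE_SIGMA = 50.0, 2.5  # e.g. a CD measurement in nm (made up)

def rel_spread_of_estimator(est, n, trials=2000):
    """Relative scatter of an estimator over repeated n-sample experiments."""
    vals = [est([random.gauss(TRUE_MU, TRUE_SIGMA) for _ in range(n)])
            for _ in range(trials)]
    return statistics.pstdev(vals) / statistics.mean(vals)

n = 50
mean_err = rel_spread_of_estimator(statistics.mean, n)    # ~ (sigma/mu)/sqrt(n)
sigma_err = rel_spread_of_estimator(statistics.stdev, n)  # ~ 1/sqrt(2n)
```

With these (invented) numbers, 50 samples pin the mean to well under 1% but leave sigma uncertain at the 10% level; matching the mean’s relative precision on sigma would take orders of magnitude more data, which is the metrology-market implication noted above.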
Indeed, when people tell you that this new paradigm would create fundamental business-model problems between, say, the foundries and the fabless, the word “fundamental” should be challenged, since this paradigm has been used in the circuit domain since before there were foundries. In the circuit domain, one uses simple models and rules for synthesis, and complex models and simulation for analysis. I believe that the layout world is going to go the same way: simple design rules for synthesis and complex simulation for analysis.
The opinions here do not necessarily represent the views of any past, present, or future employer.