3-D IC Standardization Begins – Perspectives From the Leading Edge | Blog on Semiconductor International
(01/22/2010 5:24 EST)
SAN JOSE, Calif. — Another analyst sees delays for 450-mm fabs and extreme ultraviolet (EUV) lithography – a possible sign that Moore’s Law is in danger of slowing down.
On Thursday (Jan. 21), IC Insights Inc. indicated that there could be delays for two chip-scaling enablers: 450-mm fabs and EUV. Another emerging chip-scaling technology, 3-D devices based on through-silicon vias (TSVs), remains in the embryonic stages and is “overhyped,” said Trevor Yancey, an analyst with IC Insights.
Gus Richard, an analyst with Piper Jaffray & Co., also sees delays for 450-mm fabs and EUV. “We believe that the transition to EUV will (be) challenging at best, unaffordable at the worst and likely significantly delayed,” Richard said in a new report. “The alternative cost reduction path is larger wafers (450-mm). However, equipment companies are unwilling to fund the R&D for 450-mm development.”
What does that all mean? Perhaps a slowdown in the two-year process technology cycle. ”The underlying economic engine of the semiconductor industry is Moore’s Law and the price elasticity it provides. If the cadence of Moore’s Law slows, we think the growth rate of the semiconductor industry would slow as well,” he warned.
The current recession has delayed the possible transition to the next-generation 450-mm wafer size. 450-mm fabs were supposed to happen in the 2012-to-2014 time frame.
There are some return-on-investment (ROI) issues for fab tool makers. Simply put, the fab tool customer base for 450-mm is too small, and the R&D is too costly. “We estimate that a 450-mm fab in 5-10 years will cost somewhere between $8 billion and $12 billion. In our view, only 2 to 5 companies will be able to make the transition to 450-mm due to the high cost,” Richard said.
EUV is also in trouble. On the lithography front, today’s immersion lithography technology is enabling devices down to the 3x-nm node, and maybe even the 2x-nm node. Lithography is the crucial technology that drives scaling and thus Moore’s Law, he said.
EUV is supposed to be inserted at the 16-nm logic node in 2013. IC Insights believes EUV will be delayed and may be inserted at the 13-nm node in 2015 or 2016.
“The transition to EUV lithography may take longer and cost more than is expected,” Richard warned. “NAND and DRAM suppliers will need a production EUV tool by 2012 or 2013 and Intel would like to have EUV by 2014. We estimate that ASML will ship 4 or 5 beta tools in 2010, and it has indicated that these tools will be ready for production in 2012. However, based on our conversations with industry contacts, many believe that EUV will not be ready until 2014 or 2016.”
So what will the industry do instead? “We believe that the current generation of immersion lithography tools will allow Intel to move to 16-nm and NAND flash suppliers to move to 22-nm, the foundries to move to 28-nm and DRAM manufacturers to move to the 2x-nm nodes,” he said.
“Based on our conversations with lithography experts, double or triple patterning in combination with computational lithography could extend immersion lithography to the 2x-nm node for most manufacturers,” he said. “We believe that Intel will be able to push immersion lithography to 16-nm. However, the extension of immersion to 22-nm and below is likely to add to the cost and complexity of the current immersion lithographic process, potentially making immersion at advanced nodes uneconomical.”
Not all agree, namely ASML Holding NV and Nikon Corp. Both are developing EUV tools.
“ASML is making the bet on EUV; we believe that it is a bold and high-stakes bet. We believe that it is too early to predict EUV’s success or failure and more will be known as beta systems are installed in the second half of 2010,” Richard pointed out.
Electrostatic discharge (ESD) occurs when objects – including people, furniture, machines, integrated circuits and electrical cables – become charged and discharged. Electrostatic charging brings objects to surprisingly high potentials of many thousands of volts in ordinary home or office environments. ESD produces currents that can have rise times of less than a nanosecond, peak currents of tens of amps, and durations from tens to hundreds of nanoseconds. Unless ESD robustness is included during design, these current levels can damage electrical components and upset or damage electrical systems from cell phones to computers.
ESD tests ensure that electrical components and systems can survive the ESD stresses that they will encounter. Active components such as integrated circuits and transistors are tested using the Human Body Model (HBM) [1] and the Charged Device Model (CDM) [2] to ensure they can be handled without damage during manufacture in a controlled ESD environment. Systems are tested for use in non-ESD-controlled environments according to IEC 61000-4-2 [3].
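For a feel of the magnitudes involved, the HBM stress can be approximated by a single RC discharge. The sketch below is an idealized model, assuming only the canonical 100 pF capacitor and 1.5 kΩ resistor of the HBM network; real testers add parasitics that shape the sub-nanosecond rise.

```python
import math

# First-order sketch of an HBM discharge: an ideal 100 pF capacitor into a
# 1.5 kOhm resistor. This toy model captures only peak current and decay,
# not the rise-time shaping of a real tester.
def hbm_current(t_ns: float, v_charge: float,
                c_f: float = 100e-12, r_ohm: float = 1500.0) -> float:
    """Discharge current in amps at time t_ns (ns) after the switch closes."""
    tau_ns = r_ohm * c_f * 1e9        # RC time constant in ns (150 ns here)
    return (v_charge / r_ohm) * math.exp(-t_ns / tau_ns)

peak_2kv = hbm_current(0.0, 2000.0)   # a 2 kV HBM stress peaks near 1.3 A
```

A 2 kV stress thus peaks around 1.3 A and decays with a 150 ns time constant, consistent with the pulse durations described above.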
A common drawback of ESD tests is the limited information they return. A component or system is stressed at a voltage level, and the unit either survives the stress or it does not. There is no further information. In 1985 T. Maloney and N. Khurana [4] introduced Transmission Line Pulse (TLP) as a way to study integrated circuit technologies and circuit behavior in the current and time domain of ESD events. The method became indispensable for integrated circuit ESD protection development, especially after Barth Electronics introduced the first commercial TLP system in the mid-1990s.
Figure 1. This simplified schematic of a time domain reflection TLP system is an idealized view of the captured waveforms and how they are used to build up a TLP I-V curve.
Time Domain Reflection TLP
Time Domain Reflection (TDR) TLP with a 100 ns pulse length is the most common version, shown schematically in Figure 1. A 50 Ω transmission line is charged through a high-value resistor. The length of the transmission line determines the length of the pulse. Flipping switch S initiates a pulse which travels on 50 Ω cable, through an attenuator, to the device under test (DUT), and reflects from the DUT back to the attenuator. The 50 Ω 10X attenuator prevents multiple reflections. Voltage and current probes between the attenuator and the DUT capture the pulse waveforms on a single-shot digital oscilloscope.
The voltage and current at the DUT is the sum of the incident and reflected pulses. For 100 ns TLP systems, the incident and reflected pulses overlap at the voltage and current probes. Therefore, the oscilloscope directly measures DUT voltage and current in the pulse overlap region. The measurement of a voltage-current pair is illustrated in Figure 1 for a <50 Ω DUT. The voltage-current pair provides a single point on an I-V curve. A full I-V curve for a DUT is mapped out by charging and discharging the transmission line at progressively higher voltages. Commercial 100 ns TLP systems produce current pulses from 1 mA up to 10 A or 20 A into a short. Most TLP systems can also measure DC leakage after each pulse, allowing the system to detect damage to the sample.
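The arithmetic behind that overlap measurement can be sketched in a few lines: in an idealized 50 Ω TDR system the DUT voltage is the sum of the incident and reflected amplitudes, while the DUT current is their difference divided by the line impedance. The function name and values below are illustrative, not a vendor API.

```python
# Idealized TDR-TLP arithmetic at the probe point. One pulse yields one
# (V, I) point; sweeping the charge voltage builds the full I-V curve.
Z0 = 50.0  # characteristic impedance of the cable (ohms)

def tlp_iv_point(v_incident: float, v_reflected: float) -> tuple:
    """One (V, I) point on the TLP I-V curve from a single pulse."""
    v_dut = v_incident + v_reflected           # voltages superpose at the DUT
    i_dut = (v_incident - v_reflected) / Z0    # net current into the DUT
    return v_dut, i_dut

# A 25-ohm DUT reflects with coefficient (25-50)/(25+50) = -1/3, so the
# measured voltage drops and current rises, as for the <50-ohm DUT in Figure 1.
v, i = tlp_iv_point(10.0, -10.0 / 3.0)
```

Dividing the recovered voltage by the recovered current returns the 25 Ω DUT resistance, which is a quick self-consistency check on the probe calibration.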
Example of TLP Use
Figure 2 illustrates TLP measurements on a simple circuit element — a grounded gate nMOS transistor. Grounded gate nMOS transistors are often used as protection elements within CMOS ICs. An nMOS specifically designed for ESD can carry considerable current without damage. Without proper design, nMOS transistors are very sensitive to ESD. Figure 2a illustrates TLP stress applied to the drain relative to the grounded source with the gate tied to the source. Figure 2b is a typical TLP I-V curve of an nMOS transistor. At low TLP stress, the transistor is off and no current flows. When the stress voltage reaches the avalanche breakdown of the drain, current begins to flow. At Vt1, It1, sufficient current flows to turn on the parasitic bipolar transistor formed by the drain (collector), substrate (base) and source (emitter). The turning on of the bipolar transistor results in a drop in voltage, often called bipolar snapback. The bipolar region is characterized by Vsb, the snapback voltage, and the resistance of the snapback region, R. The snapback region ends at the second breakdown point, Vt2, It2.
Figure 2. Sample TLP I-V and leakage evolution on a grounded gate nMOS device.
The TLP I-V is most useful when combined with the leakage measurements in Figure 2c. After each TLP pulse, the leakage of the nMOS is measured. The leakage is plotted on the X axis, and the pulse current on the Y axis. The Y-axis scales for Figures 2b and 2c are the same, allowing easy comparison. Figures 2b and 2c show that the transition from avalanche to snapback at Vt1, It1 results in no increase in leakage. The second breakdown transition at Vt2, It2 does result in device damage. The parameters in Figure 2b provide a great deal of information about the ESD properties of the nMOS. Vt1 is the voltage needed to trigger the protection properties of the nMOS. Vsb and R can be used to predict voltage drops across the nMOS during an ESD event. It2 is a measure of the transistor’s ability to carry current during an ESD event.
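Extracting Vsb and R from the snapback portion of a measured TLP I-V curve amounts to fitting a straight line. A minimal sketch, assuming the snapback-region points have already been selected (the data below is synthetic, not a measured device):

```python
# Toy extraction of the snapback parameters Vsb and R from TLP points in the
# snapback region, via a plain least-squares line fit of I = (V - Vsb) / R.
def snapback_fit(v_points, i_points):
    """Fit I = (V - Vsb) / R and return (Vsb, R)."""
    n = len(v_points)
    mean_v = sum(v_points) / n
    mean_i = sum(i_points) / n
    num = sum((v - mean_v) * (i - mean_i) for v, i in zip(v_points, i_points))
    den = sum((v - mean_v) ** 2 for v in v_points)
    slope = num / den                  # dI/dV, i.e. 1/R
    r = 1.0 / slope
    vsb = mean_v - mean_i * r          # extrapolate the fit line back to I = 0
    return vsb, r

# Synthetic snapback region with Vsb = 5 V and R = 2 ohms:
vsb, r = snapback_fit([6.0, 7.0, 8.0, 9.0], [0.5, 1.0, 1.5, 2.0])
```

The fitted Vsb and R can then be used, as noted above, to predict the voltage drop across the nMOS at a given ESD current.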
TLP is an indispensable tool for understanding the electrical properties of integrated circuits at the times and current levels of ESD events. For studying HBM, 100 ns pulses are used; more recently, 5 ns and shorter Very Fast TLP (VF-TLP) pulses have been used to explore the CDM time scale. TLP can be used on individual circuit elements, input and output buffers, and full integrated circuits. In addition to measuring I-V curves, TLP can be used to study time dependence and turn-on time.
Robert Ashton joined ON Semiconductor in 2007 in the discrete products division as a senior protection and compliance specialist after three years as director of technology at White Mountain Labs, a provider of ESD and latch-up testing of integrated circuits. He has published numerous articles on ESD testing of integrated circuits, test structure use in integrated circuits and CMOS technology development.
[1] JEDEC JESD22-A114D, Electrostatic Discharge (ESD) Sensitivity Testing, Human Body Model (HBM).
[2] JEDEC JESD22-C101C, Field-Induced Charged-Device Model Test Method for Electrostatic-Discharge-Withstand Thresholds of Microelectronic Components.
[3] IEC 61000-4-2: Electromagnetic compatibility (EMC), Part 4-2: Testing and measurement techniques, Electrostatic discharge immunity test.
[4] T. Maloney and N. Khurana, “Transmission Line Pulsing Techniques for Circuit Modeling of ESD Phenomena,” in Proceedings of the EOS/ESD Symposium 7 (Minneapolis, MN: ESD Association, 1985): 49–54.
Transmission Line Pulse testing, or TLP testing, is a method for semiconductor characterization of electrostatic discharge (ESD) protection structures. In the TLP test, high-current pulses are applied to the pin under test (PUT) at successively higher levels through a coaxial cable of specified length. The applied pulses are of a current amplitude and duration representative of a Human Body Model (HBM) event, or of a Charged Device Model (CDM) event in the case of Very Fast TLP (VF-TLP). The incident and reflected pulses are evaluated, and a voltage-current (V-I) curve is developed that describes the response of an ESD protection structure to the applied TLP stresses. The TLP test is unique because the current pulses can be on the order of amps, and the test results can show the turn-on, snapback, and hold characteristics of the ESD protection structure.
TLP testing is useful in two important ways. First, TLP may be used to characterize input/output (I/O) pad cells on test chips for new process technologies and intellectual property (IP). TLP is very useful in developing simulation parameters, and for making qualitative comparisons of the relative merit of different ESD protection schemes for innovative pad cell designs. Second, TLP may be used as an electrical failure analysis tool, often in combination with conventional, standards-based component ESD testing.
A standard practice for TLP testing, ESDA SP5.5-2003, is available on the ESD Association web site at http://www.esda.org. When qualifying a TLP test service, several important factors should be considered: (a) the technical expertise and experience of the engineering staff, (b) the quality practices, such as ISO 17025 accreditation, and (c) the availability of industry-standard ESD test services on-site.
POSTED BY: Sally Adee // Tue, December 15, 2009
The time honored trope of teen movies is the mousy nobody who finally takes off her glasses and lets down her ponytail, and suddenly she’s the prom queen. In the semiconductor industry’s version of that movie, that girl’s name is Packaging.
Packaging was the undercurrent of much of this year’s International Electron Devices Meeting. No one could have put it better than Semiconductor Industry Association vice president Pushkar Apte, who stated that “packaging is the red-headed stepchild” of the industry. Until now, anyway. Two major forces are driving the attention back to packaging: Medical applications and the end of scaling.
As an example of the former, at IEDM, Purdue University researchers showed implantable wireless transponders that can monitor radiation doses received during cancer treatments. The miniature transponders would be implanted near a tumor during radiation therapy. The part is a prototype, as far as I understand, and the Purdue researchers are working with the radiation oncology department at the University of Texas Southwestern Medical Center. There, doctors can give them an idea of what’s needed in terms of packaging. But what happens when a part like this transitions from prototype to off-the-shelf? It’s going to need innovative packaging. That’s what.
The second driver is the ever-impending end of Moore’s law. It’s no secret that engineers are running out of options with transistor scaling. The industry is nominally at the 32-nm process node, which means Intel is about to start shipping microprocessors with 32-nm feature sizes; its 32-nm processor, called Westmere, is due soon. No one else is there yet.
But other chipmakers are struggling to keep up with that roadmap. AMD only released its first 45-nm processors this past January. According to EETimes, “a period of more than two years is now expected between the introduction of AMD’s 32nm technology and the previous 45nm node first seen in late 2008.”
TSMC is also lagging behind Intel but ahead of AMD with 32-nm process technology, which it expects to have ready in 2010. (For more on where everyone stands with 32-nm process technology, read this exhaustively researched EE Times piece.)
Why is it so hard to scale? Researchers agree that the industry has hit a brick wall: scaling transistors to ever-tinier dimensions causes reliability to fall steeply. Several researchers told me as much off the record, and at a short course on Sunday, attendees repeatedly expressed frustration at the difficulties of further scaling.
3D integration looks like a viable alternative for chipmakers who don’t want to bang their heads against Moore’s law in the quest for 22-nm process technology. 3D integration boils down to this: stack ‘em vertically instead of squeezing more and tinier transistors on a planar surface. It means that with a fixed transistor and die size, you can still add processors and memory. Johns Hopkins University electrical engineering professor Andreas Andreou estimated that by the time the industry arrives at 22-nm process technology, it would be more effective to stack four 22-nm chips than press on to the 11-nm node. “The gold rush of shrinking will be replaced by 3D,” he predicted.
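Andreou’s estimate follows from simple density arithmetic: halving the feature size quadruples planar density, and stacking four tiers does the same without a node change. A toy check, assuming ideal 1/size² density scaling and ignoring TSV area overhead and yield:

```python
# Idealized density arithmetic for the stacking-vs-scaling tradeoff.
# Transistor density per unit footprint is taken to scale as 1/feature^2,
# and stacking multiplies it by the tier count; overheads are ignored.
def relative_density(feature_nm: float, tiers: int = 1,
                     ref_nm: float = 22.0) -> float:
    """Transistor density relative to a single planar 22 nm tier."""
    return tiers * (ref_nm / feature_nm) ** 2
```

Four stacked 22-nm tiers and one planar 11-nm die both come out at 4x the density of a single 22-nm die, which is the equivalence behind the estimate.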
Even Nvidia is on the 3D bandwagon: John Chen said in his keynote presentation that graphics processors can’t make progress unless they go 3D. Two IEDM sessions were devoted entirely to advanced 3D technology and processing for memory and logic. In one session chaired by researchers from IBM and Samsung, CEA-LETI researchers threw down the gauntlet: For the first time, they said, 3D CMOS integration can be considered a viable alternative to sub-22nm technology nodes. TSMC researchers positioned 3D integration as healthy competition for the 28-nm node. IMEC, Fujitsu, and ST Micro presented their research into making 3D work.
Researchers are divided on the severity of the issues that plague 3D integration: heat, alignment, and metal contamination all remain. But according to Hopkins professor Andreou and NEC researcher Yoshihiro Hayashi, heat is a red herring: any number of innovations will easily solve the heat problem by the time 3D packaged wafers are ready to hit the shelves (among these, using through-silicon vias to transport the excess heat to the heat sink, but that’s a whole other story).
In any case, the general assumption is that you can work around the Moore’s law limitations by doing other things, like 3D integration. At the very least, 3D chip integration might buy the industry a little time so that researchers can get their ducks in a row with promising technologies like extreme ultraviolet lithography, multigate transistors, and 2nd-gen high-k metal gate technology.
But we’re not at the prom yet. (We’re still watching the part of the movie where the best friend realizes that our girl Packaging needs a haircut and a full face of makeup.)
You’ll note that most of the problems researchers described are about packaging. Many ingredients in 3D stacks rely on innovations in packaging to make them viable. To solve the heat problem, for example, researchers are assuming that new ways of diverting excess heat to the heat sink will be developed. But who’s going to figure that out? Are through-silicon vias part of the chip or part of the packaging? What about those heat sinks?
3D chips require new kinds of packaging. And new kinds of packaging require innovation. And that, at last, is the crux of the problem: innovations in packaging? Whose problem is that?
The semiconductor industry has disintegrated, over the past decades, into many horizontal layers. Consider how the chip in your laptop got there. A designer at a fabless semiconductor company probably designed it and then sent the design to TSMC. TSMC manufactured the chip based on those designs and sent it to the packaging company, which packaged the chip and sent it to the systems guy, who put it all together and sent it to its final destination, the end-unit provider.
Now companies are finding that they need to re-integrate at the leading edge. Some fabless companies have said that in order to get the packaging they want, they need to invest in packaging startups.
That disintegration/reintegration dynamic raises a question: who across these companies has ownership, with all the rewards and liabilities that word implies? If packaging becomes more important and plays a bigger role in chip design and innovation, it will need to address issues of heat, reliability, and safety, particularly for medical applications.
The packaging industry as a whole sees about $20 billion in revenue each year. Contrast that with Intel alone, which pulls down $40 to $50 billion a year. Additionally, chipmakers on average pump almost 20 percent of their revenue back into R&D. Contrast that again with R&D spending by packaging companies. ASE, the biggest packaging behemoth at about $3.5 billion a year, is the record holder among its cohort for how much it spends on research and development: 3.2 percent. Twenty percent of $40 billion is a lot, and that’s probably why Intel is going to be the first to ship 32-nm processors. And 3.2 percent of $3.5 billion? It’s not enough for any kind of risky, out-of-the-box innovation. The industry is just going along to get along.
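The gap is easy to put in numbers, using the figures quoted above (Intel’s revenue taken at the low end of the range):

```python
# The R&D gap above, worked out with the figures as quoted in the post.
intel_revenue = 40e9          # low end of "$40 to $50 billion a year"
chipmaker_rd_share = 0.20     # "almost 20 percent"
ase_revenue = 3.5e9           # "about $3.5 billion a year"
ase_rd_share = 0.032          # "3.2 percent"

intel_rd = intel_revenue * chipmaker_rd_share   # $8 billion a year
ase_rd = ase_revenue * ase_rd_share             # about $112 million a year
ratio = intel_rd / ase_rd                       # roughly a 70x gap
```

One chipmaker’s R&D budget outweighs the top packaging house’s by roughly seventy to one, which is the asymmetry the rest of this post turns on.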
Who can blame them? Why should they absorb the risks that will plague any kind of innovation in packaging? Innovation in packaging also implies liability. Just look at what happened to Apple last year when Nvidia famously screwed up its GeForce GPUs. Apple had to replace the faulty chips for free. The problem was traced to a packaging defect.
Microsoft had to write down its first Xbox chips because of packaging issues that led to the infamous “red ring of death” – to the tune of $1 billion.
And if you’re still not convinced, think about the potential liability in medical implants.
Right now, no one is in a position to be responsible for innovation in packaging, but innovation is sorely needed. Someone needs to step up and give this poor girl a makeover.
Qualcomm Director of Advanced Technology Matt Nowak outlined the cost and technology challenges facing 3-D interconnects in a speech at an IEEE 3-D IC conference. “If this technology adds more than 10% to final costs, it will not be widely used in high-volume wireless technology,” he said.
Phillip Garrou, Contributing Editor — Semiconductor International, 10/6/2009
In a plenary speech at the IEEE 3-D IC conference in San Francisco, Qualcomm Inc. (San Diego) Director of Advanced Technology Matt Nowak said 3-D interconnects face plenty of issues that must be dealt with before the benefits of the approach can be realized.
“While 3-D with TSVs currently has significant industry momentum, more development work is needed to bring this technology to high-volume manufacturing,” Nowak said, adding that TSV (through-silicon via) development and characterization needs to move to leading-edge CMOS, containing strained transistors, ultralow-k dielectrics, and thin die.
Although 300 mm equipment installations are beginning worldwide and test chips are being reported, Nowak noted that a number of issues need to be overcome, including:
• Lack of 300 mm lines in production
• Lack of standard process flows
• Unproven yield/reliability
• Unclear supply chain handoffs
• Lack of consensus on cost targets
The attraction of TSVs is apparent for mobile wireless devices looking for low-cost solutions that improve power efficiency while enhancing performance in terms of bandwidth/milliwatt. Noting that Qualcomm today relies on stacked bare die using wire bond and flip-chip, Nowak said 3-D TSV technology would enable “new architectural solutions that can only be realized with such high-density tier-to-tier connections.”
Many potential 3-D IC users are clamoring for immediate standardization, but Nowak said it may be too early to standardize the technical solutions. Standards eventually will be needed for:
• TSV size, tier thickness, via fill material
• Tier-to-tier pin locations and assignments
• Microbump and passivation materials, properties and geometries
• Reliability test methods
Nowak indicated that foundry TSVs, in which the vias are created in the middle of the process flow, made the most sense and would probably end up being the high-volume manufacturing technology of choice.
Although it is still not resolved where the handoff point will be between the foundry and the outsourced semiconductor assembly and test (OSAT) supplier, Nowak pointed out that handle wafer mounting and dismounting must be done by the same group.
After studying the cost-of-ownership models of IMEC, Sematech and EMC-3D, Qualcomm derived its own preliminary economics and determined that the overall cost is dominated by post-fab backside processing. One of the technical conclusions the company reached from its cost modeling is that “thinner is better”: going from 50 µm to 20 µm thick layers could reduce the TSV module portion of the total cost by as much as 25%, provided the added thin-wafer handling costs are not substantial.
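A rough way to read that result: thinning buys back a fraction of the module cost, and the move pays off only while the extra thin-wafer handling costs stay below that saving. A toy sketch, where only the 25% figure comes from the article and the $100 baseline and handling costs are hypothetical:

```python
# Net effect of thinning on a hypothetical TSV module cost. Only the 25%
# saving fraction is taken from the article; other numbers are illustrative.
def thinning_saving(base_module_cost: float, saving_fraction: float,
                    extra_handling_cost: float) -> float:
    """Net per-module saving from thinner layers, after handling costs."""
    return base_module_cost * saving_fraction - extra_handling_cost

# On a $100 module, 50 um -> 20 um saves up to $25, so thinning only pays
# off while thin-wafer handling adds less than that.
net = thinning_saving(100.0, 0.25, 10.0)   # $15 net saving in this example
```

If handling overhead climbs above the $25 ceiling, the net saving goes negative, which is exactly the caveat in Qualcomm’s conclusion.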
Nowak said cost will determine the extent of 3-D IC product adoption. “If this technology adds more than 10% to final costs, it will not be widely used in high-volume wireless technology.”